Merge branch 'release/v0.1'
rconway committed Aug 6, 2020
2 parents 36dcfed + ff174b9 commit 2665b60
Showing 50 changed files with 419 additions and 328 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -71,7 +71,7 @@ The EOEPCA system deployment comprises several steps. Instructions are provided

The first step is to clone this repository to your local platform...
```
$ git clone --branch v0.1 git@github.com:EOEPCA/eoepca.git
$ git clone --branch v0.1.1 git@github.com:EOEPCA/eoepca.git
```
NOTE that this clones the specific tag that is well tested. For the latest development branch the `--branch` option should be omitted.

@@ -124,6 +124,7 @@ Not started yet

EOEPCA system releases are made to provide integrated deployments of the developed building blocks. The release history is as follows:

* 06/08/2020 - [Release 0.1.1](release-notes/release-0.1.1.md)
* 22/06/2020 - [Release 0.1](release-notes/release-0.1.md)

<!-- ISSUES -->
2 changes: 1 addition & 1 deletion bin/install-terraform.sh
@@ -13,7 +13,7 @@ if ! unzip --help >/dev/null 2>&1
then
sudo apt-get -y install unzip
fi
curl -sLo terraform.zip https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
curl -sLo terraform.zip https://releases.hashicorp.com/terraform/0.12.29/terraform_0.12.29_linux_amd64.zip
unzip terraform.zip
rm -f terraform.zip
chmod +x terraform
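The change above bumps the terraform version pinned in the download URL. As a small sketch (not part of the repository), the pin can be extracted from such a URL to check that path and filename agree when bumping it:

```shell
#!/bin/sh
# Sketch: derive the pinned terraform version from a HashiCorp release URL.
# The URL is copied from the script above; the sed expression is illustrative.
url="https://releases.hashicorp.com/terraform/0.12.29/terraform_0.12.29_linux_amd64.zip"
version=$(echo "$url" | sed -n 's|.*/terraform/\([0-9.]*\)/.*|\1|p')
echo "$version"
```

Printing `0.12.29` confirms the URL path matches the intended pin.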
4 changes: 2 additions & 2 deletions creodias/README.md
@@ -12,7 +12,7 @@ Terraform must be installed. See [terraform website](https://www.terraform.io/)

Alternatively, use helper script [install-terraform.sh](../bin/install-terraform.sh)...
```
$ ../bin/install-terraform.sh
$ bin/install-terraform.sh
```

## OpenStack Client
@@ -53,7 +53,7 @@ The clouds.yaml must be placed in one of the following locations:

## Deployment Configuration

Before initiating deployment, the file [eoepca.tfvars](./eoepca.tfvars) should be tailored to fit the specific needs of your target environment.
Before initiating deployment, the file [creodias/eoepca.tfvars](./eoepca.tfvars) should be tailored to fit the specific needs of your target environment.

## Initiate Deployment

7 changes: 4 additions & 3 deletions kubernetes/README.md
@@ -20,7 +20,7 @@ RKE must be installed. See [Rancher website](https://rancher.com/products/rke/)

Alternatively, use helper script [install-rke.sh](../bin/install-rke.sh)...
```
$ ../bin/install-rke.sh
$ bin/install-rke.sh
```

## RKE Configuration
@@ -35,7 +35,8 @@ The helper script [create-cluster-config.sh](create-cluster-config.sh) automatic
* configuration of connection via bastion

```
$ ./create-cluster-config.sh
$ cd kubernetes
$ create-cluster-config.sh
```

## Create Kubernetes Cluster
@@ -73,7 +74,7 @@ NOTE that, in order to use kubectl from your local platform, it is necessary to

## Access via Bastion host

For administration the deployment VMs must be accessed through the bastion host (via its public floating IP). The default deployment installs the public key of the user as an authorized key in each VM to facilitate this. Further information [here](../creodias/README.md#access_via_bastion_host).
For administration the deployment VMs must be accessed through the bastion host (via its public floating IP). The default deployment installs the public key of the user as an authorized key in each VM to facilitate this. Further information [here](../creodias/README.md#access-via-bastion-host).

The ssh connection to the bastion can be used to establish a VPN from your local platform to the cluster using [sshuttle](https://sshuttle.readthedocs.io/en/stable/).
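The concrete sshuttle command is collapsed in this diff view. A purely illustrative composition is sketched below; the user, IP and subnet are placeholders, not values from this repository:

```shell
#!/bin/sh
# Illustrative only: compose an sshuttle invocation that routes the (assumed)
# cluster subnet through the bastion's public floating IP.
BASTION_USER="eouser"              # placeholder user name
BASTION_IP="203.0.113.10"          # placeholder floating IP (TEST-NET-3 range)
CLUSTER_SUBNET="192.168.123.0/24"  # placeholder cluster subnet
cmd="sshuttle -r ${BASTION_USER}@${BASTION_IP} ${CLUSTER_SUBNET}"
echo "$cmd"
```

Running the composed command (rather than echoing it) would forward traffic for the given subnet over the ssh connection.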
10 changes: 8 additions & 2 deletions minikube/README.md
@@ -10,7 +10,7 @@ For k8s cluster adminstration the kubectl command must be installed. See [Kubern

Alternatively, use helper script [install-kubectl.sh](../bin/install-kubectl.sh)...
```
$ ../bin/install-kubectl.sh
$ bin/install-kubectl.sh
```

## Install minikube
@@ -19,7 +19,13 @@ Minikube can be installed by following the instructions on the [Minikube website

Alternatively, use helper script [setup-minikube.sh](./setup-minikube.sh) to download and install Minikube...
```
$ ./setup-minikube.sh
$ minikube/setup-minikube.sh
```

NOTE for running minikube in a VM...<br>
The setup-minikube.sh script retains the default (preferred) deployment of minikube as a docker container. This is not ideal when minikube runs inside a VM; in that case it is better to run minikube natively inside the VM using the 'none' driver rather than the 'docker' driver. This can be achieved by running the script as follows...
```
$ minikube/setup-minikube.sh native
```
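The driver selection implemented in setup-minikube.sh amounts to: an explicit argument wins, otherwise the `vagrant` user implies a VM and hence 'native', otherwise the docker default applies. A standalone sketch of that deduction (the `docker` fallback label is a simplification; in the real script an empty mode just means a default `minikube start`):

```shell
#!/bin/sh
# Sketch of the MINIKUBE_MODE deduction in setup-minikube.sh:
# explicit argument wins; else user 'vagrant' implies a VM, so 'native'.
deduce_mode() {
  mode="$1"
  user="$2"
  if [ -z "${mode}" ] && [ "${user}" = "vagrant" ]; then mode="native"; fi
  echo "${mode:-docker}"   # empty mode falls back to the docker default
}
deduce_mode ""     "vagrant"   # native
deduce_mode ""     "alice"     # docker
deduce_mode native "alice"     # native
```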

## Next Steps
31 changes: 26 additions & 5 deletions minikube/setup-minikube.sh
@@ -10,12 +10,33 @@ mkdir -p $HOME/.local/bin

# minikube: download and install locally
echo "Download minikube..."
curl -sLo $HOME/.local/bin/minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
curl -sLo $HOME/.local/bin/minikube https://github.com/kubernetes/minikube/releases/download/v1.12.1/minikube-linux-amd64 \
&& chmod +x $HOME/.local/bin/minikube

# start minikube
# - default container runtime is docker - see https://minikube.sigs.k8s.io/docs/handbook/config/#runtime-configuration
echo "Start minikube, and wait for cluster..."
minikube start --addons ingress --wait "all"
# If MINIKUBE_MODE is not set, and USER is vagrant, deduce we are running in a VM, so use 'native' mode
MINIKUBE_MODE="$1"
if [ -z "${MINIKUBE_MODE}" -a "${USER}" = "vagrant" ]; then MINIKUBE_MODE="native"; fi

# minikube (native)
if [ "${MINIKUBE_MODE}" = "native" ]
then
if hash conntrack 2>/dev/null
then
# start minikube
# - default container runtime is docker - see https://minikube.sigs.k8s.io/docs/handbook/config/#runtime-configuration
echo "Start minikube (native), and wait for cluster..."
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E $HOME/.local/bin/minikube start --driver=none --addons ingress --wait "all"
else
echo "ERROR: conntrack must be installed for minikube driver='none', e.g. 'sudo apt install conntrack'. Aborting..."
exit 1
fi
# minikube docker
else
# start minikube
# - default container runtime is docker - see https://minikube.sigs.k8s.io/docs/handbook/config/#runtime-configuration
echo "Start minikube (default), and wait for cluster..."
minikube start --addons ingress --wait "all"
fi

echo "...READY"
9 changes: 9 additions & 0 deletions release-notes/release-0.1.1.md
@@ -0,0 +1,9 @@
# EOEPCA System - Release 0.1.1

Release 0.1.1 is a minor version release that includes system-level integration and deployment fixes back-ported from the main development branch.

The scope & functionality, and hence the component versions, are unchanged from release 0.1 whose description is in the [Release 0.1 Release Note](release-0.1.md).

## Further Information

For further project information, including details of how to make a deployment of the EOEPCA system, please see the [main project page](../README.md).
11 changes: 3 additions & 8 deletions terraform/global/proc-ades/dependencies.tf
@@ -1,8 +1,3 @@
resource "null_resource" "waitfor-login-service" {
depends_on = [ var.module_depends_on ]
provisioner "local-exec" {
command = <<EOT
until [ `kubectl logs service/oxauth | grep "Server:main: Started" | wc -l` -ge 1 ]; do echo "Waiting for Login Service" && sleep 30; done
EOT
}
}
resource "null_resource" "waitfor-module-depends" {
depends_on = [var.module_depends_on]
}
4 changes: 2 additions & 2 deletions terraform/global/proc-ades/main.tf
@@ -6,7 +6,7 @@ resource "kubernetes_deployment" "ades" {
app = "ades"
}
}
depends_on = [ var.module_depends_on, null_resource.waitfor-login-service ]
depends_on = [null_resource.waitfor-module-depends]

spec {
replicas = 1
@@ -62,7 +62,7 @@ resource "kubernetes_service" "ades" {
app = "ades"
}
}
depends_on = [ var.module_depends_on, null_resource.waitfor-login-service ]
depends_on = [kubernetes_deployment.ades]

spec {
port {
11 changes: 3 additions & 8 deletions terraform/global/rm-workspace/dependencies.tf
@@ -1,8 +1,3 @@
resource "null_resource" "waitfor-login-service" {
depends_on = [ var.module_depends_on ]
provisioner "local-exec" {
command = <<EOT
until [ `kubectl logs service/oxauth | grep "Server:main: Started" | wc -l` -ge 1 ]; do echo "Waiting for Login Service" && sleep 30; done
EOT
}
}
resource "null_resource" "waitfor-module-depends" {
depends_on = [var.module_depends_on]
}
19 changes: 17 additions & 2 deletions terraform/global/rm-workspace/main.tf
@@ -6,7 +6,7 @@ resource "kubernetes_deployment" "workspace" {
app = "workspace"
}
}
depends_on = [ var.module_depends_on, null_resource.waitfor-login-service ]
depends_on = [null_resource.waitfor-module-depends]

spec {
replicas = 1
@@ -75,7 +75,7 @@ resource "kubernetes_service" "workspace" {
app = "workspace"
}
}
depends_on = [ var.module_depends_on, null_resource.waitfor-login-service ]
depends_on = [kubernetes_deployment.workspace]

spec {
port {
@@ -91,4 +91,19 @@ resource "kubernetes_service" "workspace" {

type = "NodePort"
}

provisioner "local-exec" {
command = <<-EOT
interval=$(( 5 ))
msgInterval=$(( 30 ))
step=$(( msgInterval / interval ))
count=$(( 0 ))
until kubectl logs service/workspace 2>/dev/null | grep "Nextcloud was successfully installed" >/dev/null 2>&1
do
test $(( count % step )) -eq 0 && echo "Waiting for service/workspace"
sleep $interval
count=$(( count + interval ))
done
EOT
}
}
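The provisioner added above polls every `interval` seconds but prints its waiting message only about every `msgInterval` seconds. That throttling arithmetic can be exercised in isolation; in this sketch the kubectl readiness check is replaced by a fixed number of polls:

```shell
#!/bin/sh
# Sketch of the message throttling in the workspace provisioner:
# interval=5, msgInterval=30 => a message roughly every 6th poll.
interval=$(( 5 ))
msgInterval=$(( 30 ))
step=$(( msgInterval / interval ))
count=$(( 0 ))
messages=0
for i in 1 2 3 4 5 6 7 8 9 10 11 12    # 12 polls stand in for the log check
do
  test $(( count % step )) -eq 0 && messages=$(( messages + 1 ))
  count=$(( count + interval ))         # counts elapsed seconds, as in the diff
done
echo "$messages"
```

Over 12 polls (60 simulated seconds) the message fires twice, i.e. once per msgInterval, because `count % step` is zero exactly when `count` is a multiple of 30.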
1 change: 0 additions & 1 deletion terraform/global/rm-workspace/workspace-ingress.tf
@@ -18,4 +18,3 @@ resource "kubernetes_ingress" "workspace" {
}
}
}

29 changes: 26 additions & 3 deletions terraform/global/storage/processing.tf
@@ -1,5 +1,5 @@
resource "kubernetes_persistent_volume" "eoepca_proc_pv" {
count = "${var.nfs_server_address == "none" ? 0 : 1}"
count = "${var.storage_class == "eoepca-nfs" ? 1 : 0}"
metadata {
name = "eoepca-proc-pv"
labels = {
@@ -15,7 +15,30 @@ resource "kubernetes_persistent_volume" "eoepca_proc_pv" {
persistent_volume_source {
nfs {
server = var.nfs_server_address
path = "/data/proc"
path = "/data/proc"
}
}
}
}

resource "kubernetes_persistent_volume" "eoepca_proc_pv_host" {
count = "${var.storage_class == "eoepca-nfs" ? 0 : 1}"
metadata {
name = "eoepca-proc-pv-host"
labels = {
eoepca_type = "proc"
}
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
capacity = {
storage = "5Gi"
}
persistent_volume_source {
host_path {
path = "/kubedata/proc"
type = "DirectoryOrCreate"
}
}
}
@@ -30,7 +53,7 @@ resource "kubernetes_persistent_volume_claim" "eoepca_pvc" {
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
access_modes = ["ReadWriteMany"]
resources {
requests = {
storage = "3Gi"
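The two changed count expressions are complementary, so exactly one persistent volume is created for a given storage class: the NFS-backed PV when `storage_class` is `eoepca-nfs`, otherwise the new host-path PV. A sketch of the selection (names taken from the diff; logic only, not real Terraform evaluation):

```shell
#!/bin/sh
# Sketch of the complementary count logic in processing.tf: the NFS-backed
# PV is created when storage_class is 'eoepca-nfs', else the host-path PV.
select_pv() {
  if [ "$1" = "eoepca-nfs" ]
  then
    echo "eoepca-proc-pv"        # nfs source (count = 1, host PV count = 0)
  else
    echo "eoepca-proc-pv-host"   # host_path source (count = 1, nfs count = 0)
  fi
}
select_pv "eoepca-nfs"   # eoepca-proc-pv
select_pv "standard"     # eoepca-proc-pv-host
```

The same pattern is repeated in resource-management.tf and user-management.tf for the resman and userman volumes.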
29 changes: 26 additions & 3 deletions terraform/global/storage/resource-management.tf
@@ -1,5 +1,5 @@
resource "kubernetes_persistent_volume" "eoepca_resman_pv" {
count = "${var.nfs_server_address == "none" ? 0 : 1}"
count = "${var.storage_class == "eoepca-nfs" ? 1 : 0}"
metadata {
name = "eoepca-resman-pv"
labels = {
@@ -15,7 +15,30 @@ resource "kubernetes_persistent_volume" "eoepca_resman_pv" {
persistent_volume_source {
nfs {
server = var.nfs_server_address
path = "/data/resman"
path = "/data/resman"
}
}
}
}

resource "kubernetes_persistent_volume" "eoepca_resman_pv_host" {
count = "${var.storage_class == "eoepca-nfs" ? 0 : 1}"
metadata {
name = "eoepca-resman-pv-host"
labels = {
eoepca_type = "resman"
}
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
capacity = {
storage = "5Gi"
}
persistent_volume_source {
host_path {
path = "/kubedata/resman"
type = "DirectoryOrCreate"
}
}
}
@@ -30,7 +53,7 @@ resource "kubernetes_persistent_volume_claim" "eoepca_resman_pvc" {
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
access_modes = ["ReadWriteMany"]
resources {
requests = {
storage = "3Gi"
29 changes: 26 additions & 3 deletions terraform/global/storage/user-management.tf
@@ -1,5 +1,5 @@
resource "kubernetes_persistent_volume" "eoepca_userman_pv" {
count = "${var.nfs_server_address == "none" ? 0 : 1}"
count = "${var.storage_class == "eoepca-nfs" ? 1 : 0}"
metadata {
name = "eoepca-userman-pv"
labels = {
@@ -15,7 +15,30 @@ resource "kubernetes_persistent_volume" "eoepca_userman_pv" {
persistent_volume_source {
nfs {
server = var.nfs_server_address
path = "/data/userman"
path = "/data/userman"
}
}
}
}

resource "kubernetes_persistent_volume" "eoepca_userman_pv_host" {
count = "${var.storage_class == "eoepca-nfs" ? 0 : 1}"
metadata {
name = "eoepca-userman-pv-host"
labels = {
eoepca_type = "userman"
}
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
capacity = {
storage = "5Gi"
}
persistent_volume_source {
host_path {
path = "/kubedata/userman"
type = "DirectoryOrCreate"
}
}
}
@@ -30,7 +53,7 @@ resource "kubernetes_persistent_volume_claim" "eoepca_userman_pvc" {
}
spec {
storage_class_name = var.storage_class
access_modes = ["ReadWriteMany"]
access_modes = ["ReadWriteMany"]
resources {
requests = {
storage = "3Gi"