
Merge pull request #269 from syseleven/acunin/various-fixes
Various fixes
Adri2000 authored Feb 14, 2024
2 parents e0965bc + 51053a4 commit 656c192
Showing 32 changed files with 100 additions and 100 deletions.
2 changes: 1 addition & 1 deletion user/pages/01.syseleven-stack/default.en.md
@@ -27,7 +27,7 @@ Our documentation is structured in four distinct sections:

## Examples

On top of the documentation, SysEleven provides a [library of heat templates](https://github.com/syseleven/heat-examples) as well as a [library of terraform examples](https://github.com/syseleven/terraform-examples) that will help you with the creation of a complex setup.
On top of the documentation, SysEleven provides a [library of heat templates](https://github.com/syseleven/heat-examples) as well as a [library of Terraform examples](https://github.com/syseleven/terraform-examples) that will help you with the creation of a complex setup.

## Support

2 changes: 1 addition & 1 deletion user/pages/02.Tutorials/02.api-access/docs.en.md
@@ -53,7 +53,7 @@ In this example you see a network pool which provides us with Floating IP addresses

## Using infrastructure templates

Now you can use the OpenStack command line tools to control all the infrastructure components of the SysEleven Stacks (i.e., networks, security groups, virtual machines). To automate this, you can use Heat templates which are a structured representation of your setups. SysEleven provides examples that work with the SysEleven Stack on Github. Feel free to check them out!
Now you can use the OpenStack command line tools to control all the infrastructure components of the SysEleven Stacks (i.e., networks, security groups, virtual machines). To automate this, you can use Heat templates which are a structured representation of your setups. SysEleven provides examples that work with the SysEleven Stack on GitHub. Feel free to check them out!

Example setups can be copied as follows:

10 changes: 5 additions & 5 deletions user/pages/02.Tutorials/05.lbaas/docs.en.md
@@ -27,16 +27,16 @@ Below you will find two tutorials in two variants: how to set up an HTTP load ba

## Git repository with Terraform examples

The Terraform examples used in the tutorials are [available on Github](https://github.com/syseleven/terraform-examples/tree/master/lbaas)
The Terraform examples used in the tutorials are [available on GitHub](https://github.com/syseleven/terraform-examples/tree/master/lbaas)

```shell
git clone https://github.com/syseleven/terraform-examples.git
```

This repository is used in both setups described below:

* terraform-examples/lbaas-octavia-http: contains the Terraform receipe for an HTTP load balancer set up using Octavia resources
* terraform-examples/lbaas: contains the terraform template for a TCP load balancer set up using Neutron LBaaSv2 resources
* terraform-examples/lbaas-octavia-http: contains the Terraform recipe for an HTTP load balancer set up using Octavia resources
* terraform-examples/lbaas: contains the Terraform template for a TCP load balancer set up using Neutron LBaaSv2 resources

## HTTP Load Balancer with Terraform and Octavia

@@ -100,7 +100,7 @@ Outputs:

```shell
loadbalancer_http = "http://185.56.128.100"
```

Note that the "Allowed CIDRs" of the listeners in the example are already set to a value (here 0.0.0.0/0). This is in contrast to Heat, where you have to set them in a separate step. The security groups are also configured in the terraform receipe.
Note that the "Allowed CIDRs" of the listeners in the example are already set to a value (here 0.0.0.0/0). This is in contrast to Heat, where you have to set them in a separate step. The security groups are also configured in the Terraform recipe.

### Step two: Check if the load balancer works properly

@@ -174,7 +174,7 @@ Open AnyApp in other tabs/windows to see the load balancer working.

## Git repository with Heat template examples

The heat template examples used in the tutorials are available [on Github](https://github.com/syseleven/heat-examples).
The heat template examples used in the tutorials are available [on GitHub](https://github.com/syseleven/heat-examples).

```shell
git clone https://github.com/syseleven/heat-examples.git
```
2 changes: 1 addition & 1 deletion user/pages/02.Tutorials/06.dependencies/docs.en.md
@@ -102,4 +102,4 @@ syselevenstack@kickstart:~/heat-examples/example-setup$ openstack stack create -

## Examples

We provided an [example setup](https://github.com/syseleven/heat-examples/tree/master/example-setup) on github which uses dependencies.
We provided an [example setup](https://github.com/syseleven/heat-examples/tree/master/example-setup) on GitHub which uses dependencies.
20 changes: 10 additions & 10 deletions user/pages/02.Tutorials/07.affinity/docs.en.md
@@ -10,7 +10,7 @@ taxonomy:

## Goal

* This tutorial shows howto distribute instances to different hosts using servergroups
* This tutorial shows howto distribute instances to different hosts using server groups
* It is also shown how to force instances on the same host

## Prerequisites
@@ -21,7 +21,7 @@ taxonomy:

## Problem

By default, there is no guarantees wether servers will be distributed across different hypervisors. The Nova compute scheduler makes that decision based on available resources.
By default, there is no guarantees whether servers will be distributed across different hypervisors. The Nova compute scheduler makes that decision based on available resources.
This can lead to services that are meant to be highly available to share a common host and thus share a single point of failure.
Inversely it might be desired to have two services to be located as close as possible, because they will need high bandwidth between each other.
Both cases are solvable using ServerGroups. That way you can influence the distribution of instances.
@@ -196,15 +196,15 @@ openstack server show server_1 -c name -c hostId

```shell
+--------+----------------------------------------------------------+
```

## Link Ressources/Sources
## Link Resources/Sources

* [Heat Template Guide](http://docs.openstack.org/developer/heat/template_guide/index.html)
* [Heat Template ServerGroup Resource](http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::ServerGroup)
* [Nova Scheduler Reference](http://docs.openstack.org/mitaka/config-reference/compute/scheduler.html)
* [Nova Scheduler Affinity Filter](http://docs.openstack.org/mitaka/config-reference/compute/scheduler.html#servergroupaffinityfilter)
* [OpenStack Client Server Create](http://docs.openstack.org/developer/python-openstackclient/command-objects/server.html#server-create)
* [OpenStack Client ServerGroup](http://docs.openstack.org/developer/python-openstackclient/command-objects/server-group.html)
* [Heat Template Guide](https://docs.openstack.org/developer/heat/template_guide/index.html)
* [Heat Template ServerGroup Resource](https://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::ServerGroup)
* [Nova Scheduler Reference](https://docs.openstack.org/mitaka/config-reference/compute/scheduler.html)
* [Nova Scheduler Affinity Filter](https://docs.openstack.org/mitaka/config-reference/compute/scheduler.html#servergroupaffinityfilter)
* [OpenStack Client Server Create](https://docs.openstack.org/developer/python-openstackclient/command-objects/server.html#server-create)
* [OpenStack Client ServerGroup](https://docs.openstack.org/developer/python-openstackclient/command-objects/server-group.html)

## Links/Examples

The templates published on github [affinity](https://github.com/syseleven/heat-examples/tree/master/affinity) contain examples for *affinity* und *anti-affinity*.
The templates published on GitHub [affinity](https://github.com/syseleven/heat-examples/tree/master/affinity) contain examples for *affinity* and *anti-affinity*.
24 changes: 12 additions & 12 deletions user/pages/02.Tutorials/08.local-storage/docs.en.md
@@ -11,51 +11,51 @@ taxonomy:

## Objective

This tutorial aims to enable you to make use of the local ssd ephemeral storage provided as an alternative to the default distributed ephemeral storage in SysEleven Stack.
This tutorial aims to enable you to make use of the local SSD ephemeral storage provided as an alternative to the default distributed ephemeral storage in SysEleven Stack.

## Prerequisites

* You should be able to use simple heat templates, like shown in the [first steps tutorial](../01.firststeps/docs.en.md).
* You know the basics of using the [OpenStack CLI-Tools](../../03.Howtos/02.openstack-cli/docs.en.md).
* Environment variables are set, like shown in the [API-Access-Tutorial](../02.api-access/docs.en.md).

## How to setup an instance with local ssd storage
## How to setup an instance with local SSD storage

There are two ways to achieve this goal and we show both, beginning with the quickest one.

### Use our heat-example for a single server using local ssd as epehemeral storage
### Use our heat-example for a single server using local SSD as ephemeral storage

You will be working with the [heat examples repository](https://github.com/syseleven/heat-examples) on Github. Your first step is to clone it:
You will be working with the [heat examples repository](https://github.com/syseleven/heat-examples) on GitHub. Your first step is to clone it:

```shell
git clone https://github.com/syseleven/heat-examples
cd heat-examples/single-server-on-local-storage
```

Now you can create the example stack for local ssd storage:
Now you can create the example stack for local SSD storage:

```shell
openstack stack create -t example.yaml local-storage-example-stack -e example-env.yaml --parameter key_name=<ssh key name> --wait
```

In this command, `key_name` references an SSH-Key that you created in the [SSH Tutorial](../../03.Howtos/01.ssh-keys/docs.en.md).

You have now created a very basic server with its ephemeral storage on local ssd.
You have now created a very basic server with its ephemeral storage on local SSD.

### Use another tutorial or heat-example

You can use any other tutorial or heat-example and modify it to use local ssd storage instead of distributed storage.
That does not always make sense, since not all workloads profit from local ssd storage, but it is in principle possible.
You can use any other tutorial or heat-example and modify it to use local SSD storage instead of distributed storage.
That does not always make sense, since not all workloads profit from local SSD storage, but it is in principle possible.
Just follow the instructions to the point right before `openstack stack create` gets executed.
Edit the stack file(s) and substitute the `m1.*` flavor with the correspondig `l1.*` flavor.
If you then continue to create the stack, the server(s) will be created using local ssd storage as ephemeral storage.
Edit the stack file(s) and substitute the `m1.*` flavor with the corresponding `l1.*` flavor.
If you then continue to create the stack, the server(s) will be created using local SSD storage as ephemeral storage.
This does not apply to attached volumes, that will [continue to use distributed storage](../../05.Background/02.local-storage/docs.en.md#can-i-combine-local-ssd-storage-with-distributed-storage).
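
The flavor substitution described above can be sketched with `sed`. This is purely illustrative and not part of the original tutorial: the file path and the `flavor` line are placeholder stand-ins, not real stack templates from the repository.

```shell
# Hypothetical one-line stack snippet, used only to demonstrate the substitution;
# in practice you would edit the template file(s) from the example you cloned.
printf 'flavor: m1.small\n' > /tmp/example-local.yaml

# Replace the m1.* flavor prefix with the corresponding l1.* prefix
sed -i 's/m1\./l1./g' /tmp/example-local.yaml

cat /tmp/example-local.yaml   # flavor: l1.small
```

Before running something like this against real templates, double-check that a matching `l1.*` flavor actually exists for every `m1.*` flavor you replace.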

## Implications

See our [Background article about Local SSD Storage](../../05.Background/02.local-storage/docs.en.md) for more information about the implications of using local ssd storage.
See our [Background article about Local SSD Storage](../../05.Background/02.local-storage/docs.en.md) for more information about the implications of using local SSD storage.

## Summary / Conclusion

You now have created a basic server with local ssd storage and learned, how to modify other tutorials to make use of local ssd storage.
You now have created a basic server with local SSD storage and learned, how to modify other tutorials to make use of local SSD storage.
You should now be able to do anything you were able to do with distributed storage, with local storage as well.
12 changes: 6 additions & 6 deletions user/pages/02.Tutorials/09.dnsaas/docs.en.md
@@ -12,11 +12,11 @@ taxonomy:

OpenStack's Designate provides a Domain Name Service as a service (DNSaaS).
This means that zones and records can be configured within OpenStack via an API (also see the [reference guide](../../04.Reference/07.dns/docs.en.md)) and will be queryable via DNS protocol from public nameservers run by SysEleven.
This functionality can be very handy to automate updates to dns records caused by failovers or deployments, it also facilitatess automated renewal of letsencrypt ssl certificates.
This functionality can be very handy to automate updates to dns records caused by failovers or deployments, it also facilitates automated renewal of Let's Encrypt SSL certificates.

The tutorial is intended to make you familiar with the main functionality and aware of the presence of more advanced features of the DNS service of the SysEleven Stack.
It shows examples for the openstack CLI, discovering the functionality in the [GUI](https://cloud.syseleven.de/horizon/project/dnszones/) is left as an exercise to the user.
The API can also be used with [Terraform](https://www.terraform.io/docs/providers/openstack/r/dns_zone_v2.html), have a look at our [terraform examples](https://github.com/syseleven/terraform-examples).
The API can also be used with [Terraform](https://www.terraform.io/docs/providers/openstack/r/dns_zone_v2.html), have a look at our [Terraform examples](https://github.com/syseleven/terraform-examples).

### Prerequisites

@@ -73,7 +73,7 @@ The zone name can be any publicly available domain name or subdomain name. Top l

#### Create a secondary (slave) zone

To create a secondary or slave zone, thats content are actually managed by (and obtained from) the primary or master server, you need to specify the master server(s):
To create a secondary or slave zone, whose content is actually managed by (and obtained from) the primary or master server, you need to specify the master server(s):

```shell
$ openstack zone create --type SECONDARY --masters 123.45.67.89 -- secondary.domain.example.
```

@@ -102,7 +102,7 @@ $ openstack zone create --type SECONDARY --masters 123.45.67.89 -- secondary.dom

Note, that the shown email-address `hostmaster@example.com` is a placeholder by openstack, any given value will be ignored for secondary zones.

Attention: Because more than one master ip address can be specified, the list must either be terminated with a double dash or the whole parameter with its list be moved to the end of the command line.
Attention: Because more than one master IP address can be specified, the list must either be terminated with a double dash or the whole parameter with its list be moved to the end of the command line.

```shell
openstack zone create secondary.domain.example. --type SECONDARY --masters 123.45.67.89
```

@@ -111,7 +111,7 @@ openstack zone create secondary.domain.example. --type SECONDARY --masters 123.4

#### Have the zone delegated to the SysEleven Stack nameservers

The delegation of a zone will be done by the appropriate registry for the toplevel domain where the registered domain belongs to. Most likely it will be triggered via your registrar or reseller. They need to know the nameservers that the domain shall be delegated to. You can obtain that list with the following command
The delegation of a zone will be done by the appropriate registry for the top level domain where the registered domain belongs to. Most likely it will be triggered via your registrar or reseller. They need to know the nameservers that the domain shall be delegated to. You can obtain that list with the following command

```shell
$ openstack recordset list domain.example. --type ns
```

@@ -263,7 +263,7 @@ $ openstack zone delete domain.example.de.
| Problem | Solution |
|---|---|
| Duplicate Zone| Zone has already been created, either by you or by another user. See [collisions](#collisions). |
| Invalid TLD | Zone names must be within a known toplevel domain. Contact us if you believe the top level domain is valid. |
| Invalid TLD | Zone names must be within a known top level domain. Contact us if you believe the top level domain is valid. |
| More than one label is required | It is not allowed to create a zone for a top level domain. |
| Zone name cannot be the same as a TLD | It is not allowed to create a zone for a known top level domain. |
| u'domain.example' is not a 'domainname'| Domain names must be fully qualified, i.e. end with a dot. |
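
The last error in the table comes from zone names that are not fully qualified. As a hedged, purely local illustration (the `check_fqdn` helper is hypothetical, not part of the Designate CLI), such a mistake can be caught before calling `openstack zone create`:

```shell
# Illustrative pre-flight check: zone names passed to Designate must be
# fully qualified, i.e. end with a trailing dot.
check_fqdn() {
  case "$1" in
    *.) echo "ok: $1" ;;
    *)  echo "error: '$1' must end with a dot, e.g. '$1.'"; return 1 ;;
  esac
}

check_fqdn "domain.example."          # → ok: domain.example.
check_fqdn "domain.example" || true   # prints the error hint
```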
4 changes: 2 additions & 2 deletions user/pages/02.Tutorials/10.cinder-multiattach/docs.en.md
@@ -95,7 +95,7 @@ Linux app-instance-1 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC 2

```shell
apt install ocfs2-tools linux-modules-extra-5.15.0-25-generic
```

OCFS2 is configurend in /etc/default/o2cb and /etc/ocfs/cluster.conf.
OCFS2 is configured in /etc/default/o2cb and /etc/ocfs/cluster.conf.
Set `O2CB_ENABLED=true` in /etc/default/o2cb. All other settings can be left unchanged.
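
As a minimal sketch of that configuration step (using a temporary copy as a stand-in for `/etc/default/o2cb`, so it can be tried safely anywhere — do not blindly rewrite the real file):

```shell
# Demo copy standing in for /etc/default/o2cb
conf=/tmp/o2cb.demo
printf 'O2CB_ENABLED=false\nO2CB_BOOTCLUSTER=ocfs2\n' > "$conf"

# Enable the o2cb service non-interactively; all other settings stay unchanged
sed -i 's/^O2CB_ENABLED=.*/O2CB_ENABLED=true/' "$conf"

grep '^O2CB_ENABLED' "$conf"   # O2CB_ENABLED=true
```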

@@ -164,4 +164,4 @@ Now it is possible to read and write files on the volume from all VMs.

## References

* [SysEleven Stack block storage reference guide](../../04.Reference/04.block-storage/docs.en.md)
* [OCFS2 best practices guide](http://www.oracle.com/us/technologies/linux/ocfs2-best-practices-2133130.pdf)
* [OCFS2 best practices guide](https://www.oracle.com/us/technologies/linux/ocfs2-best-practices-2133130.pdf)
20 changes: 10 additions & 10 deletions user/pages/02.Tutorials/11.object-storage-acls/docs.en.md
@@ -11,19 +11,19 @@ taxonomy:

### Overview

In this tutorial we will use the [Object Storage](../../04.Reference/05.object-storage/docs.en.md) to create buckets and objects with and limit the access to these by applying ACLs. We will be using the [s3cmd](http://s3tools.org/s3cmd) S3 client and the python library [boto3](https://boto3.readthedocs.io) to manage our resources.
In this tutorial we will use the [Object Storage](../../04.Reference/05.object-storage/docs.en.md) to create buckets and objects with and limit the access to these by applying ACLs. We will be using the [s3cmd](https://s3tools.org/s3cmd) S3 client and the python library [boto3](https://boto3.readthedocs.io) to manage our resources.

!! **A word of caution**
!! If possible stick to canned ACLs. We want to discourge the usage of custom ACLs.
!! Due to the implementation and the unituitive way of setting ACLs, we see a huge potential of misconfiguration.
!! If possible stick to canned ACLs. We want to discourage the usage of custom ACLs.
!! Due to the implementation and the unintuitive way of setting ACLs, we see a huge potential of misconfiguration.
!! If you insist on using custom ACLs, please try to confirm they are working as intended.

### Prerequisites

* You know the basics of using the [OpenStack CLI-Tools](../../03.Howtos/02.openstack-cli/docs.en.md).
* Environment variables are set, like shown in the [API-Access-Tutorial](../../02.Tutorials/02.api-access/docs.en.md).
* You have created EC2 credentials for your OpenStack user to be able to use the [Object Storage](../../04.Reference/05.object-storage/docs.en.md).
* You have installed [s3cmd](http://s3tools.org/s3cmd)
* You have installed [s3cmd](https://s3tools.org/s3cmd)
* You have installed python and the [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) library

We suggest you use the python library boto3 to reproduce all scenarios shown in this tutorial. Using only s3cmd will leave buckets open for group members. We are in contact with the software manufacturer of our object storage about that.
@@ -45,16 +45,16 @@ check_ssl_hostname = False

```

#host_base = s3.cbk.cloud.syseleven.net
#host_bucket = %(bucket).s3.cbk.cloud.syseleven.net
#website_endpoint = http://%(bucket)s.s3.cbk.cloud.syseleven.net/%(location)s/
#website_endpoint = http://s3.cbk.cloud.syseleven.net/%(bucket)s/%(location)s/
#website_endpoint = https://%(bucket)s.s3.cbk.cloud.syseleven.net/%(location)s/
#website_endpoint = https://s3.cbk.cloud.syseleven.net/%(bucket)s/%(location)s/
host_base = s3.dbl.cloud.syseleven.net
host_bucket = %(bucket).s3.dbl.cloud.syseleven.net
#website_endpoint = http://%(bucket)s.s3.dbl.cloud.syseleven.net/%(location)s/
website_endpoint = http://s3.dbl.cloud.syseleven.net/%(bucket)s/%(location)s/
#website_endpoint = https://%(bucket)s.s3.dbl.cloud.syseleven.net/%(location)s/
website_endpoint = https://s3.dbl.cloud.syseleven.net/%(bucket)s/%(location)s/
#host_base = s3.fes.cloud.syseleven.net
#host_bucket = %(bucket).s3.fes.cloud.syseleven.net
#website_endpoint = http://s3.fes.cloud.syseleven.net/%(bucket)s/%(location)s/
#website_endpoint = http://%(bucket)s.s3.fes.cloud.syseleven.net/%(location)s/
#website_endpoint = https://s3.fes.cloud.syseleven.net/%(bucket)s/%(location)s/
#website_endpoint = https://%(bucket)s.s3.fes.cloud.syseleven.net/%(location)s/
```

We can configure an s3 client with the boto3 library using following python snippet (example is in DBL region):