Merge pull request #279 from nbalacha/docs
chore: update the docs
openshift-merge-robot authored Oct 14, 2022
2 parents e82a5df + dfc42d1 commit 9a4ac0a
Showing 7 changed files with 89 additions and 98 deletions.
12 changes: 6 additions & 6 deletions doc/design/operator.md
@@ -3,15 +3,15 @@
# Controllers and their managed resources


-- **lvmcluster-controller:** Running in the operator deployment, it will create all resources that don't require information from the node. When applicable, the health of the underlying resource is updated in the LVMCluster status and errors are also exposed as events. Overall success also passed on as an event.:
+- **lvmcluster-controller:** Running in the operator deployment, it will create all resources that don't require information from the node. When applicable, the health of the underlying resource is updated in the LVMCluster status.:
- vgmanager daemonset
- lvmd daemonset
-  - CSIDriver CR
-  - CSI Driver Controller Deployment (controller is the name of the csi-component)
-  - CSI Driver Daemonset
+  - TopoLVM CSIDriver CR
+  - TopoLVM CSI Driver Controller Deployment (controller is the name of the csi-component)
+  - TopoLVM CSI Driver Node Daemonset
- needs an initContainer to block until lvmd config file is read
-- **The vg-manager:** A daemonset with one instance per selected node, will create all resources that require knowledge from the node. Errors and PVs being added to a volumegroup will be passed on as events.
-  - volumegroups
+- **The vg-manager:** A daemonset with one instance per selected node, it will create all resources that require knowledge from the node.
+  - volumegroups and thinpools
- lvmd config file
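For orientation, a minimal `LVMCluster` CR that drives the flow above might look like the sketch below. The group/version follows the operator's `lvm.topolvm.io/v1alpha1` API; the device-class name, thin-pool name and sizes are placeholder values, not defaults taken from this commit.

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster-sample
spec:
  storage:
    deviceClasses:
      - name: vg1                  # device class handled by lvmcluster-controller
        thinPoolConfig:
          name: thin-pool-1        # thin pool created per node by vg-manager
          sizePercent: 90
          overprovisionRatio: 10
```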


18 changes: 9 additions & 9 deletions doc/design/thin_pool.md
@@ -1,7 +1,7 @@
# LVMO: Thin provisioning

## Summary
-- LVM thin provisioning allows creation of volumes whose combined size is greater than that of the available storage.
+- LVM thin provisioning allows the creation of volumes whose combined virtual size is greater than that of the available storage.

**Advantages**:
- Storage space can be used more effectively. More users can be accommodated for the same amount of storage space when compared to thick provisioning. This significantly reduces upfront hardware cost for the storage admins.
@@ -20,9 +20,9 @@ The LVMO will create a thin-pool LV in the volume group in order to create thinl
- The `deviceClass` API in the `LVMClusterSpec` will contain the mapping between a device-class and a thin-pool in volume group.
- One device-class will be mapped to a single thin pool.
- User should be able to configure the thin-pool size based on percentage of the available volume group size.
-- Default chunk size of the thin pool will be 512 kiB
+- Default chunk size of the thin pool will be 128 kiB
- `lvmd.yaml` config file should be updated with the device class, volume group and thin-pool mapping.
-- Alerts should be triggered if a thin-pool `data` or `metadata` available size crosses a predefined threshold limit.
+- Alerts should be triggered if the thin-pool `data` or `metadata` usage crosses a predefined threshold limit.


## Design Details
@@ -41,7 +41,7 @@ The LVMO will create a thin-pool LV in the volume group in order to create thinl
+ // SizePercent represents percentage of remaining space in the volume group that should be used
+ // for creating the thin pool.
-+ // +kubebuilder:validation:default=75
++ // +kubebuilder:validation:default=90
+ // +kubebuilder:validation:Minimum=10
+ // +kubebuilder:validation:Maximum=90
+ SizePercent int `json:"sizePercent,omitempty"`
@@ -69,7 +69,7 @@ type DeviceClass struct {
- Following new fields will added to `DeviceClass` API
- **ThinPoolConfig** API contains information related to a thin pool.These configuration options are:
- **Name**: Name of the thin-pool
-  - **SizePercent**: Size of the thin pool to be created with respect to available free space in the volume group. It represents percentage value and not absolute size values. Size value should range between 10-90. It defaults to 75 if no value is provided.
+  - **SizePercent**: Size of the thin pool to be created with respect to available free space in the volume group. It represents percentage value and not absolute size values. Size value should range between 10-90. It defaults to 90 if no value is provided.
- **OverprovisionRatio**: The factor by which additional storage can be provisioned compared to the available storage in the thin pool.
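A rough Go sketch of the `ThinPoolConfig` type implied by the fields above is shown below; the json tags and validation markers mirror the snippet earlier in this document, while the exact field set should be verified against the operator's api package.

```go
// ThinPoolConfig is a sketch of the API described above; names, tags and
// markers are illustrative rather than copied from the operator source.
type ThinPoolConfig struct {
	// Name of the thin pool to be created in the volume group.
	Name string `json:"name"`

	// SizePercent is the percentage of remaining volume group space used for
	// the thin pool; valid range 10-90, defaulting to 90.
	// +kubebuilder:validation:default=90
	// +kubebuilder:validation:Minimum=10
	// +kubebuilder:validation:Maximum=90
	SizePercent int `json:"sizePercent,omitempty"`

	// OverprovisionRatio is the factor by which volumes may be provisioned
	// beyond the available space in the thin pool.
	OverprovisionRatio int `json:"overprovisionRatio"`
}
```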

- `LVMVolumeGroup` API changes:
@@ -100,7 +100,7 @@ type LVMVolumeGroupSpec struct {
```
where:
- Size is `LVMClusterSpec.Storage.DeviceClass.ThinPoolConfig.SizePercent`
-- chunk size is 512KiB, which is the default.
+- chunk size is 128KiB, which is the default.
- VG manager will also update the `lvmd.yaml` file to map volume group and its thin-pool to the topolvm device class.
- Sample `lvmd.yaml` config file
@@ -112,15 +112,15 @@ device-classes:
type: thin
thin-pool-config:
name: pool0
-    overprovision-ratio: 50.0
+    overprovision-ratio: 5.0
```
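For reference, creating a thin pool that takes 90% of the volume group's free space with the default 128KiB chunk size corresponds roughly to the lvm2 command below; `vg1` and `thin-pool-1` are placeholder names, and the VG manager performs this programmatically rather than through a shell.

```sh
# Illustrative only: thin pool sized at 90% of free space, 128KiB chunks
lvcreate -l 90%FREE --chunksize 128k --thinpool thin-pool-1 vg1
```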

### Monitoring and Alerts
- Available thin-pool size (both data and metadata) should be provided by topolvm as prometheus metrics.
- Threshold limits for the thin-pool should be provide as static values in the PrometheusRule.
-- If used size of data or metadata for a particular thin-pool crosses the threshold, then appropriate alerts should be triggered.
+- If the data or metadata usage for a particular thin-pool crosses a threshold, appropriate alerts should be triggered.
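A PrometheusRule along the following lines could express those thresholds; the metric names (`topolvm_thinpool_data_percent`, `topolvm_thinpool_metadata_percent`) and the 75% threshold are assumptions to be checked against the TopoLVM version in use, not values taken from this commit.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: lvm-thinpool-alerts            # illustrative name
spec:
  groups:
    - name: thin-pool-usage
      rules:
        - alert: ThinPoolDataUsageHigh
          expr: topolvm_thinpool_data_percent > 75      # metric name assumed
          for: 5m
          labels:
            severity: warning
        - alert: ThinPoolMetadataUsageHigh
          expr: topolvm_thinpool_metadata_percent > 75  # metric name assumed
          for: 5m
          labels:
            severity: warning
```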


### Open questions
- What should be the chunk size of the thin-pools?
-- Use default size a 512 kiB for now.
+- Use default size a 128 kiB for now.
6 changes: 3 additions & 3 deletions doc/dev-guide/lvmo-units.md
@@ -2,11 +2,11 @@

- *lvmVG* reconcile units deploys and manages LVMVolumeGroup CRs
- The LVMVG resource manager creates individual LVMVolumeGroup CRs for each
-  deviceClass in the LVMCluster CR. The vgmanager watches the LVMVolumeGroup
+  deviceClass in the LVMCluster CR. The vgmanager controller watches the LVMVolumeGroup
and creates the required volume groups on the individual nodes based on the
specified deviceSelector and nodeSelector.
- The corresponding CRs forms the basis of `vgManager` unit to create volume
-  groups and create lvmd config file
+  groups and the lvmd config file for TopoLVM.
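An `LVMVolumeGroup` CR generated for a deviceClass named `vg1` might look roughly like the following; the namespace and spec values are placeholders rather than output captured from the operator.

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: vg1
  namespace: lvm-operator-system       # placeholder namespace
spec:
  thinPoolConfig:
    name: thin-pool-1
    sizePercent: 90
    overprovisionRatio: 10
```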

## Openshift SCCs

@@ -19,5 +19,5 @@

- *topolvmStorageClass* resource units creates and manages all the storage
classes corresponding to the deviceClasses in the LVMCluster
-- Storage Class name is generated with a prefix "topolvm-" added to name of the
+- Storage Class name is generated with a prefix "odf-lvm-" added to name of the
device class in the LVMCluster CR
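As an illustration, the storage class generated for a device class named `vg1` would be called `odf-lvm-vg1`; a sketch is below, with the provisioner string and parameter key assumed from TopoLVM's conventions rather than taken from this repository.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: odf-lvm-vg1
provisioner: topolvm.cybozu.com                   # assumed TopoLVM CSI driver name
parameters:
  "topolvm.cybozu.com/device-class": vg1          # maps the class to the lvmd device-class
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```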
16 changes: 8 additions & 8 deletions doc/dev-guide/reconciler.md
@@ -8,18 +8,18 @@
following resource units for setting up [Topolvm](topolvm-repo) CSI and all
the supporting resources to use storage local to the node via Logical Volume
Manager (lvm)
-- *csiDriver*: Reconciles topolvm CSI Driver
-- *topolvmController*: Reconciles topolvm controller plugin
+- *csiDriver*: Reconciles TopoLVM CSI Driver
+- *topolvmController*: Reconciles TopoLVM controller plugin
- *lvmVG*: Reconciles volume groups from LVMCluster CR
- *openshiftSccs*: Manages SCCs when the operator is run in Openshift
environment
-- *topolvmNode*: Reconciles topolvm nodeplugin along with lvmd
+- *topolvmNode*: Reconciles TopoLVM nodeplugin along with lvmd
- *vgManager*: Responsible for creation of Volume Groups
- *topolvmStorageClass*: Manages storage class life cycle based on
devicesClasses in LVMCluster CR
- The LVMO creates an LVMVolumeGroup CR for each deviceClass in the
LVMCluster CR. The LVMVolumeGroups are reconciled by the vgmanager controllers.
-- In addition to managing above resource units, lvmcluster-controller collates
+- In addition to managing the above resource units, lvmcluster-controller collates
the status of deviceClasses across nodes from LVMVolumeGroupNodeStatus and
updates status of LVMCluster CR
- `resourceManager` interface is defined in
@@ -28,7 +28,7 @@

Note:
- Above names refers to the struct which satisfies `resourceManager` interface
-- Please refer to topolvm [design][topolvm-design] doc to know more about Topolvm
+- Please refer to the topolvm [design][topolvm-design] doc to know more about TopoLVM
CSI
- Any new resource units should also implement `resourceManager` interface
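The shape of that interface is roughly as sketched below; the method set and signatures are inferred from the description above rather than copied from the source tree, so treat them as illustrative.

```go
// resourceManager is a sketch of the interface described above; lvmv1alpha1
// stands for the operator's API package, and all names are illustrative.
type resourceManager interface {
	// getName identifies the resource unit, e.g. "csiDriver" or "topolvmController".
	getName() string
	// ensureCreated creates or updates the unit's resources for an LVMCluster.
	ensureCreated(ctx context.Context, cluster *lvmv1alpha1.LVMCluster) error
	// ensureDeleted removes the unit's resources when the LVMCluster is deleted.
	ensureDeleted(ctx context.Context, cluster *lvmv1alpha1.LVMCluster) error
	// updateStatus reflects the unit's health into the LVMCluster status.
	updateStatus(ctx context.Context, cluster *lvmv1alpha1.LVMCluster) error
}
```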

@@ -38,9 +38,9 @@ Note:
created and managed across nodes with custom node selector, toleration and
device selectors
- Should be created and edited by user in operator installed namespace
-- Only a single CR instance with a single volume group containing all available
-  disks across schedulable nodes is supported and implementation respecting
-  tolerations, device and node selector fields is coming soon
+- Only a single CR instance with a single volume group is supported.
+- The user can choose to specify the devices to be used for the volumegroup.
+- All available disks will be used if no devicePaths are specified,.
- All fields in `status` are updated based on the status of volume groups
creation across nodes
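For example, a deviceClass that pins the volume group to specific disks could be expressed roughly as in the fragment below; the paths are placeholders, and omitting `deviceSelector` falls back to using all available disks.

```yaml
deviceClasses:
  - name: vg1
    deviceSelector:
      paths:                 # optional; omit to use all available disks
        - /dev/nvme1n1
        - /dev/nvme2n1
    thinPoolConfig:
      name: thin-pool-1
      sizePercent: 90
      overprovisionRatio: 10
```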

27 changes: 14 additions & 13 deletions doc/dev-guide/topolvm-csi.md
@@ -1,34 +1,35 @@
-# Topolvm CSI
+# TopoLVM CSI

-- LVM Operator deploys the Topolvm CSI which provides dynamic provisioning of
+- LVM Operator deploys the TopoLVM CSI plugin which provides dynamic provisioning of
local storage.
-- Please refer to topolvm [docs][topolvm-docs] for more details on topolvm
+- Please refer to TopoLVM [docs][topolvm-docs] for more details on topolvm

## CSI Driver

-- *csiDriver* reconcile unit deploys Topolvm CSIDriver resource
+- *csiDriver* reconcile unit creates the Topolvm CSIDriver resource
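The CSIDriver object itself is small; a representative manifest is sketched below, with the driver name and flag values assumed from TopoLVM's defaults rather than taken from this repository.

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: topolvm.cybozu.com     # assumed TopoLVM driver name
spec:
  attachRequired: false
  podInfoOnMount: true
  storageCapacity: true        # enables CSI storage capacity tracking for scheduling
  volumeLifecycleModes:
    - Persistent
```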

-## Topolvm Controller
+## TopoLVM Controller

-- *topolvmController* reconcile unit deploys a single Topolvm Controller plugin
+- *topolvmController* reconcile unit deploys a single TopoLVM Controller plugin
deployment and manages any updates to the deployment
-- Topolvm scheduler is not used for pod scheduling. The CSI StorageCapacity
-  tracking feature by the scheduler
-- An init container generates openssl certs to be used in topolvm controller
+- The TopoLVM scheduler is not used for pod scheduling. The CSI StorageCapacity
+  tracking feature is used by the scheduler to determine the node on which
+  to provision storage.
+- An init container generates openssl certs to be used in topolvm-controller
which will be soon replaced with cert-manager

## Topolvm Node and LVMd

-- *topolvmNode* reconcile unit deploys and manages topolvm node plugin and lvmd
-  daemonset and scales it based on node selector specified in the devicesClasses
+- *topolvmNode* reconcile unit deploys and manages the TopoLVM node plugin and lvmd
+  daemonset and scales it based on the node selector specified in the devicesClasses
in LVMCluster
- An init container polls for the availability of lvmd config file before
starting the lvmd and topolvm-node containers

## Deletion

-- All above resources will be removed by their respective reconcile units when
-  LVM Cluster CR governing then is deleted
+- All the resources above will be removed by their respective reconcile units when
+  LVMCluster CR governing then is deleted


[topolvm-docs]: https://github.com/topolvm/topolvm/tree/main/docs
12 changes: 6 additions & 6 deletions doc/dev-guide/vg-manager.md
@@ -2,11 +2,11 @@

## Creation

-- On LVMCluster CR creation `vg-manager` daemonset pods are created
-- They'll be run on all the nodes which matches the Node Selector specified in
-  the CR, as of now it's run on all schedulable nodes
-- A controller owner reference is set on the daemonset to be able to cleanup
-  itself when CR is deleted
+- `vg-manager` daemonset pods are created by the LVMCluster controller on LVMCluster CR creation
+- They run on all nodes which match the Node Selector specified in
+  the CR. They run on all schedulable nodes if no nodeSelector is specified.
+- A controller owner reference is set on the daemonset so it is cleaned up
+  when the LVMCluster CR is deleted.

## Reconciliation

@@ -16,7 +16,7 @@
by the LVMO.
- The vg-manager will determine the disks that match the filters
specified (currently not implemented) on the node it is running on and create
-  an LVM VG with them.
+  an LVM VG with them. It then creates the lvmd.yaml config file for lvmd.
- vg-manager also updates LVMVolumeGroupStatus with observed status of volume
groups for the node on which it is running
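In lvm2 terms, the per-node work described above amounts roughly to the command sequence below; the device path and names are placeholders, and vg-manager drives these steps through code rather than a shell.

```sh
pvcreate /dev/nvme1n1                                            # prepare the matched disk
vgcreate vg1 /dev/nvme1n1                                        # create the volume group
lvcreate -l 90%FREE --chunksize 128k --thinpool thin-pool-1 vg1  # create the thin pool
# ...then write the lvmd.yaml device-class mapping consumed by TopoLVM's lvmd
```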

