docs: update monitoring docs (#8633)
(cherry picked from commit f8b0e50)
michelle-0808 committed Dec 19, 2024
1 parent 94d1aa0 commit 8ae5a89
Showing 13 changed files with 363 additions and 776 deletions.
@@ -595,108 +595,6 @@ kbcli cluster logs myproxy --instance myproxy-mysql-0 -c vttablet

</Tabs>

## Monitoring

:::note

In the production environment, all monitoring Addons are disabled by default when installing KubeBlocks. You can enable these Addons, but it is highly recommended that you build your own monitoring system or purchase a third-party monitoring service instead. For details, refer to [Monitoring](./../../observability/monitor-database.md).

:::

<Tabs>

<TabItem value="kubectl" label="kubectl" default>

1. Enable the monitoring Addons.

For the testing/demo environment, run the commands below to enable the monitoring Addons provided by KubeBlocks.

```bash
helm install prometheus kubeblocks/prometheus --namespace kb-system --create-namespace
helm install grafana kubeblocks/grafana --namespace kb-system --create-namespace
```

For the production environment, integrate your own monitoring components. For details, refer to the documentation of the monitoring tools you use.

2. Check whether the monitoring function of this proxy cluster is enabled.

```bash
kubectl get cluster myproxy -o yaml
```

If the output YAML file shows `disableExporter: false`, the monitoring function of this proxy cluster is enabled.

If the monitoring function is not enabled, run the command below to enable it first.

```bash
kubectl patch cluster myproxy -n demo --type "json" -p '[{"op":"add","path":"/spec/componentSpecs/0/disableExporter","value":false}]'
```
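The JSON patch above targets the first entry of `spec.componentSpecs`. After it is applied, the relevant part of the Cluster manifest looks roughly like this (the component name is illustrative; use whichever component you want to monitor):

```yaml
spec:
  componentSpecs:
  - name: vtgate            # illustrative component name
    disableExporter: false  # false means the metrics exporter is enabled
```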

3. View the dashboard.

For the testing/demo environment, run the commands below to view the Grafana dashboard.

```bash
# 1. Get the username and password
kubectl get secret grafana -n kb-system -o jsonpath='{.data.admin-user}' |base64 -d
kubectl get secret grafana -n kb-system -o jsonpath='{.data.admin-password}' |base64 -d
# 2. Connect to the Grafana dashboard
kubectl port-forward svc/grafana -n kb-system 3000:80
# 3. Open the web browser and enter the address 127.0.0.1:3000 to visit the dashboard.
# 4. Enter the username and password obtained from step 1.
```
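If `base64 -d` is unavailable on your system, the decoding step can be reproduced with Python's standard library. The encoded value below is a stand-in for whatever your secret actually contains:

```python
import base64

# Stand-in for the value returned by `kubectl get secret ... -o jsonpath=...`
encoded = "YWRtaW4="
print(base64.b64decode(encoded).decode())  # admin
```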

For the production environment, you can view the dashboard of the corresponding cluster via Grafana Web Console. For more detailed information, see [the Grafana dashboard documentation](https://grafana.com/docs/grafana/latest/dashboards/).

:::note

1. If there is no data in the dashboard, check whether the job is set to `kubeblocks-service`: enter `kubeblocks-service` in the job field and press Enter.

![Monitoring dashboard](./../../../img/api-monitoring.png)

2. For more details on the monitoring function, you can refer to [Monitoring](./../../observability/monitor-database.md).

:::

</TabItem>

<TabItem value="kbcli" label="kbcli">

1. Enable the monitoring function.

```bash
kbcli cluster update myproxy --disable-exporter=false
```

2. View the Addon list and enable the Grafana Addon.

```bash
kbcli addon list
kbcli addon enable grafana
```

3. View the dashboard list.

```bash
kbcli dashboard list
```

4. Open the Grafana dashboard.

```bash
kbcli dashboard open kubeblocks-grafana
```

</TabItem>

</Tabs>

## Read-write splitting

You can enable the read-write splitting function.
@@ -767,7 +665,7 @@ You can also set the ratio for read-write splitting and here is an example of di
kbcli cluster configure myproxy --components vtgate --set=read_write_splitting_ratio=70
```
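To see what the ratio means in practice: with `read_write_splitting_ratio=70`, roughly 70% of eligible read queries are routed to read-only instances. A quick simulation of that weighted routing (an illustration of the semantics, not KubeBlocks code):

```python
import random

def route(ratio: int) -> str:
    """Route one read query: `ratio`% of queries go to a read-only instance."""
    return "read-only" if random.randrange(100) < ratio else "read-write"

random.seed(42)
counts = {"read-only": 0, "read-write": 0}
for _ in range(10_000):
    counts[route(70)] += 1
print(counts)  # roughly 7000 read-only vs 3000 read-write
```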

-Moreover, you can [use Grafana](#monitoring) or run `show workload` in the VTGate terminal to view the flow distribution.
+Moreover, you can run `show workload` in the VTGate terminal to view the flow distribution.

```bash
show workload;
@@ -157,10 +157,6 @@ Elasticsearch provides the HTTP protocol for client access on port 9200. You can
curl http://127.0.0.1:9200/_cat/nodes?v
```

## Monitor the Elasticsearch cluster

The monitoring function of Elasticsearch is the same as other engines. For details, refer to [the monitoring tutorial](./../observability/monitor-database.md).

## Scale

KubeBlocks supports horizontally and vertically scaling an Elasticsearch cluster.
@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';

KubeBlocks can quickly integrate new engines through its well-designed abstractions. The functions tested in KubeBlocks include Pulsar cluster creation and deletion, vertical and horizontal scaling of Pulsar cluster components, storage expansion, restart, and configuration changes.

-KubeBlocks supports Pulsar's daily operations, including basic lifecycle operations such as cluster creation, deletion, and restart, as well as advanced operations such as horizontal and vertical scaling, storage expansion, configuration changes, and monitoring.
+KubeBlocks supports Pulsar's daily operations, including basic lifecycle operations such as cluster creation, deletion, and restart, as well as advanced operations such as horizontal and vertical scaling, storage expansion, and configuration changes.

## Environment Recommendation

@@ -129,15 +129,15 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f

2. Create a cluster.

-   - **Option 1**: (**Recommended**) Create pulsar cluster by `values-production.yaml` and enable monitor.
+   - **Option 1**: (**Recommended**) Create pulsar cluster by `values-production.yaml`.

Configuration:
- broker: 3 replicas
- bookies: 4 replicas
- zookeeper: 3 replicas

```bash
-helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set monitor.enabled=true --namespace=demo
+helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --namespace=demo
```

- **Option 2**: Create pulsar cluster with proxy.
@@ -149,7 +149,7 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f
- zookeeper: 3 replicas

```bash
-helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set proxy.enable=true --set monitor.enabled=true --namespace=demo
+helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set proxy.enable=true --namespace=demo
```

- **Option 3**: Create pulsar cluster with proxy and deploy `bookies-recovery` component.
@@ -162,7 +162,7 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f
- bookies-recovery: 3 replicas

```bash
-helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set proxy.enable=true --set bookiesRecovery.enable=true --set monitor.enabled=true --namespace=demo
+helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set proxy.enable=true --set bookiesRecovery.enable=true --namespace=demo
```

- **Option 4**: Create pulsar cluster and specify bookies and zookeeper storage parameters.
@@ -173,7 +173,7 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f
- zookeeper: 3 replicas

```bash
-helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set bookies.persistence.data.storageClassName=<sc name>,bookies.persistence.log.storageClassName=<sc name>,zookeeper.persistence.data.storageClassName=<sc name>,zookeeper.persistence.log.storageClassName=<sc name> --set monitor.enabled=true --namespace=demo
+helm install mycluster kubeblocks/pulsar-cluster --version "x.y.z" -f values-production.yaml --set bookies.persistence.data.storageClassName=<sc name>,bookies.persistence.log.storageClassName=<sc name>,zookeeper.persistence.data.storageClassName=<sc name>,zookeeper.persistence.log.storageClassName=<sc name> --namespace=demo
```

You can specify the storage name `<sc name>`.
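The four `--set` flags in Option 4 can equivalently live in a values file. The key paths below are inferred from those flags; verify them against the chart's `values.yaml`:

```yaml
bookies:
  persistence:
    data:
      storageClassName: <sc name>
    log:
      storageClassName: <sc name>
zookeeper:
  persistence:
    data:
      storageClassName: <sc name>
    log:
      storageClassName: <sc name>
```

You can then pass this file with an additional `-f` flag alongside `values-production.yaml`.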
4 changes: 0 additions & 4 deletions docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md
@@ -214,10 +214,6 @@ If your cluster is on AWS, install the AWS Load Balancer Controller first.

</Tabs>

## Monitor the database

The monitoring function of Qdrant is the same as other engines. For details, refer to [the monitoring tutorial](./../observability/monitor-database.md).

## Scale

The scaling function for Qdrant is also supported.