Commit 8d254af

review comments
Signed-off-by: Paul S. Schweigert <paulschw@us.ibm.com>
psschwei committed Apr 12, 2022
1 parent 0da0b85 commit 8d254af
Showing 3 changed files with 8 additions and 8 deletions.
content/en/docs/tasks/debug/_index.md (4 changes: 2 additions & 2 deletions)
@@ -1,6 +1,6 @@
---
title: "Debugging"
description: Debugging a cluster or a containerized application.
title: "Monitoring, Logging, and Debugging"
description: Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application.
weight: 20
---

content/en/docs/tasks/debug/debug-cluster/debug-cluster.md (12 changes: 6 additions & 6 deletions)
@@ -1,7 +1,7 @@
---
reviewers:
- davidopp
-title: Troubleshoot Clusters
+title: Troubleshooting Clusters
description: Debugging common cluster issues.
content_type: concept
weight: 10
@@ -20,7 +20,7 @@ You may also visit [troubleshooting document](/docs/tasks/debug-application-clus

The first thing to debug in your cluster is if your nodes are all registered correctly.

-Run
+Run the following command:

```shell
kubectl get nodes
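# In a healthy cluster, every node you provisioned is listed with STATUS "Ready".
# Hypothetical output, for illustration only (names and versions will differ):
#   NAME     STATUS   ROLES           AGE   VERSION
#   node-1   Ready    control-plane   10d   v1.23.5
#   node-2   Ready    <none>          10d   v1.23.5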
@@ -220,7 +220,7 @@ status:
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use `journalctl` instead)

-### Master
+### Control Plane nodes

* `/var/log/kube-apiserver.log` - API Server, responsible for serving the API
* `/var/log/kube-scheduler.log` - Scheduler, responsible for making scheduling decisions
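
For example, on systemd-based control plane nodes these logs can often be read with `journalctl` instead of the files above. A minimal sketch, assuming each component runs as a systemd unit named after its binary (unit names vary by distribution):

```shell
# Follow the API server log via the systemd journal;
# swap in kube-scheduler, etc. for the other components
journalctl -u kube-apiserver --since "1 hour ago" -f
```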
@@ -235,15 +235,15 @@ of the relevant log files. (note that on systemd-based systems, you may need to

This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.

-### Root causes:
+### Root causes

- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, for example misconfigured Kubernetes software or application software

-### Specific scenarios:
+### Specific scenarios

- Apiserver VM shutdown or apiserver crashing
- Results
@@ -277,7 +277,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
- users unable to read API
- etc.

-### Mitigations:
+### Mitigations

- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
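
As one illustration of that first mitigation, on GCE an instance can be set to restart automatically after a host failure (the instance name below is hypothetical):

```shell
# Enable automatic restart for a control plane VM on GCE
# "kube-master-1" is a hypothetical instance name
gcloud compute instances set-scheduling kube-master-1 --restart-on-failure
```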
