
[docs][core] update quick starts with new dataset #11653

Merged 15 commits on Mar 29, 2022
130 changes: 65 additions & 65 deletions docs/content/latest/admin/yb-admin.md

Large diffs are not rendered by default.

33 changes: 14 additions & 19 deletions docs/content/latest/manage/upgrade-deployment.md
@@ -79,47 +79,42 @@ cd /home/yugabyte/softwareyb-$VER/

- Pause ~60 seconds before upgrading the next yb-tserver.

## Upgrade YSQL system catalog
## Upgrade the YSQL system catalog

Similar to PostgreSQL, YugabyteDB YSQL stores the system metadata, (also referred to as system catalog) which includes information about tables, columns, functions, users, and so on in special tables separately, for each database in the cluster.
Similar to PostgreSQL, YugabyteDB stores YSQL system metadata, referred to as the YSQL system catalog, in special tables. The metadata includes information about tables, columns, functions, users, and so on. The tables are stored separately, one for each database in the cluster.

YSQL system catalog comes as an additional layer to store metadata on top of YugabyteDB software itself. It is accessible through YSQL API and is crucial for the YSQL functionality.
YSQL system catalog upgrades are not required for clusters where YSQL is not enabled. Learn more about configuring [YSQL flags](../../reference/configuration/yb-tserver/#ysql-flags).
When new features are added to YugabyteDB, objects such as new tables and functions need to be added to the system catalog. When you create a new cluster using the latest release, it is initialized with the most recent pre-packaged YSQL system catalog snapshot.

{{< note title="Note" >}}
YSQL system catalog upgrades are applicable for clusters with YugabyteDB version 2.8 or higher.
{{< /note >}}

### Why upgrade YSQL system catalog
However, the YugabyteDB upgrade process only upgrades binaries, and doesn't affect the YSQL system catalog of an existing cluster; it remains in the same state as before the upgrade. To derive the benefits of the latest YSQL features when upgrading, you need to manually upgrade the YSQL system catalog.

With the addition of new features , there's a need to add more objects such as new tables and functions to the YSQL system catalog.
The YSQL system catalog is accessible through the YSQL API and is required for YSQL functionality. YSQL system catalog upgrades are not required for clusters where [YSQL is not enabled](../../reference/configuration/yb-tserver/#ysql-flags).
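For example, because the catalog is exposed through the YSQL API, you can inspect it with the bundled `ysqlsh` shell. The following is a sketch only, assuming a local node listening on the default YSQL host and port:

```sh
# Sketch: count the built-in functions registered in the current database's
# system catalog. Assumes ysqlsh is run from the YugabyteDB install directory
# and the node listens on 127.0.0.1:5433 (the defaults).
$ ./bin/ysqlsh -h 127.0.0.1 -p 5433 -c "SELECT count(*) FROM pg_catalog.pg_proc;"
```

A catalog that has been upgraded to a newer snapshot typically reports more entries here than one left in its pre-upgrade state.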

The usual YugabyteDB upgrade process involves only upgrading binaries, and it doesn't affect YSQL system catalog of an existing cluster; it remains in the same state as it was before the upgrade.

While a newly created cluster on the latest release is initialized with the most recent pre-packaged YSQL system catalog snapshot, an older cluster might want to manually upgrade YSQL system catalog to the latest state instead, thus getting all the benefits of the latest YSQL features.
{{< note title="Note" >}}
YSQL system catalog upgrades apply to clusters with YugabyteDB version 2.8 or higher.
{{< /note >}}

### How to upgrade YSQL system catalog
### How to upgrade the YSQL system catalog

After the YugabyteDB upgrade process completes successfully, use the [yb-admin](../../admin/yb-admin/) utility to perform an upgrade of the YSQL system catalog(YSQL upgrade) as follows:
After completing the YugabyteDB upgrade process, use the [yb-admin](../../admin/yb-admin/) utility to upgrade the YSQL system catalog as follows:

```sh
$ ./bin/yb-admin upgrade_ysql
```

For a successful YSQL upgrade, a message will be displayed as follows:
For a successful YSQL upgrade, you will see the following output:

```output
YSQL successfully upgraded to the latest version
```

In certain scenarios, a YSQL upgrade can take longer than 60 seconds, which is the default timeout value for `yb-admin`. To account for that, run the command with a higher timeout value:
In certain scenarios, a YSQL upgrade can take longer than 60 seconds, the default timeout value for `yb-admin`. If this happens, run the command with a higher timeout value:

```sh
$ ./bin/yb-admin -timeout_ms 180000 upgrade_ysql
```

Running the above command is an online operation and doesn't require stopping a running cluster. This command is idempotent and can be run multiple times without any side effects.
Upgrading the YSQL system catalog is an online operation and doesn't require stopping a running cluster. `upgrade_ysql` is idempotent and can be run multiple times without any side effects.

{{< note title="Note" >}}
Concurrent operations in a cluster can lead to various transactional conflicts, catalog version mismatches, and read restart errors. This is expected, and should be addressed by rerunning the upgrade command.
Concurrent operations in a cluster can lead to various transactional conflicts, catalog version mismatches, and read restart errors. This is expected, and should be addressed by re-running `upgrade_ysql`.
{{< /note >}}
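If you want to automate the re-run, a minimal retry sketch (not part of the documented procedure) simply invokes the idempotent command until it succeeds:

```sh
# Sketch: re-run the idempotent upgrade until it completes without error.
# Uses the higher timeout shown above; adjust the sleep interval as needed.
until ./bin/yb-admin -timeout_ms 180000 upgrade_ysql; do
  echo "upgrade_ysql reported an error; retrying in 5 seconds..."
  sleep 5
done
```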
2 changes: 1 addition & 1 deletion docs/content/latest/quick-start/_index.md
@@ -21,7 +21,7 @@ The local cluster setup on a single host is intended for development and learnin

{{< /note >}}

## Get started yourself
## Get started

<div class="row">
<div class="col-12 col-md-6 col-lg-12 col-xl-6">
19 changes: 9 additions & 10 deletions docs/content/latest/quick-start/create-local-cluster/docker.md
@@ -47,9 +47,9 @@ showAsideToc: true

</ul>

## 1. Create a local cluster
## Create a local cluster

To create a 1-node cluster with a replication factor (RF) of 1, run the command below.
To create a 1-node cluster with a replication factor (RF) of 1, run the following command.

```sh
$ docker run -d --name yugabyte -p7000:7000 -p9000:9000 -p5433:5433 -p9042:9042\
@@ -58,6 +58,7 @@ $ docker run -d --name yugabyte -p7000:7000 -p9000:9000 -p5433:5433 -p9042:9042
```

With the preceding `docker run` command, the data stored in YugabyteDB doesn't persist across container restarts. To make YugabyteDB persist data across restarts, add a volume mount option to the `docker run` command.

First, create a `~/yb_data` directory:

```sh
@@ -76,7 +77,7 @@ $ docker run -d --name yugabyte \

Clients can now connect to the YSQL and YCQL APIs at `localhost:5433` and `localhost:9042` respectively.
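For example, you can open a YSQL shell inside the container. This is a sketch that assumes the container is named `yugabyte` as above and that the YugabyteDB binaries are installed under `/home/yugabyte/bin` in the image:

```sh
# Sketch: open ysqlsh inside the container created above.
# Assumes the container name "yugabyte" and the install path /home/yugabyte/bin.
$ docker exec -it yugabyte /home/yugabyte/bin/ysqlsh -h localhost
```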

## 2. Check cluster status
## Check cluster status

```sh
$ docker ps
@@ -87,28 +88,26 @@ CONTAINER ID IMAGE COMMAND CREATED
5088ca718f70 yugabytedb/yugabyte "bin/yugabyted start…" 46 seconds ago Up 44 seconds 0.0.0.0:5433->5433/tcp, 6379/tcp, 7100/tcp, 0.0.0.0:7000->7000/tcp, 0.0.0.0:9000->9000/tcp, 7200/tcp, 9100/tcp, 10100/tcp, 11000/tcp, 0.0.0.0:9042->9042/tcp, 12000/tcp yugabyte
```

## 3. Check cluster status with Admin UI
## Check cluster status with Admin UI

Under the hood, the cluster you have just created consists of two processes: [YB-Master](../../../architecture/concepts/yb-master/) which keeps track of various metadata (list of tables, users, roles, permissions, and so on), and [YB-TServer](../../../architecture/concepts/yb-tserver/) which is responsible for the actual end user requests for data updates and queries.

Each of the processes exposes its own Admin UI that can be used to check the status of the corresponding process, and perform certain administrative operations. The [yb-master Admin UI](../../../reference/configuration/yb-master/#admin-ui) is available at <http://localhost:7000> and the [yb-tserver Admin UI](../../../reference/configuration/yb-tserver/#admin-ui) is available at <http://localhost:9000>. To avoid port conflicts, you should make sure other processes on your machine do not have these ports mapped to `localhost`.
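As a quick liveness check (a sketch; any HTTP client works), you can confirm that both UIs respond before opening them in a browser:

```sh
# Sketch: print the HTTP status code returned by each Admin UI.
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7000
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000
```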

### Overview and YB-Master status

The yb-master home page shows that you have a cluster (or universe) with `Replication Factor` of 1 and `Num Nodes (TServers)` as 1. The `Num User Tables` is `0` since there are no user tables created yet. YugabyteDB version number is also shown for your reference.
The YB-Master home page shows that you have a cluster (or universe) with a replication factor of 1, a single node, and no tables. The YugabyteDB version is also displayed.

![master-home](/images/admin/master-home-docker-rf1.png)

The Masters section highlights the cloud, region and zone placement for the yb-master servers.
The **Masters** section highlights the 1 YB-Master along with its corresponding cloud, region, and zone placement.

### YB-TServer status

Clicking on the `See all nodes` takes us to the Tablet Servers page where you can observe the 1 tserver along with the time since it last connected to this master via regular heartbeats.
Click **See all nodes** to go to the **Tablet Servers** page, which lists the YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats.

![master-home](/images/admin/master-tservers-list-docker-rf1.png)

{{<tip title="Next step" >}}
## Next step

[Explore YSQL](../../explore/ysql/)

{{< /tip >}}
@@ -47,7 +47,7 @@ showAsideToc: true

</ul>

## 1. Create a local cluster
## Create a local cluster

Create a YugabyteDB cluster in Minikube using the commands below. Note that for Helm, you have to first create a namespace.

Expand All @@ -68,7 +68,7 @@ resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi,\
replicas.master=1,replicas.tserver=1,enableLoadBalancer=False --namespace yb-demo
```

## 2. Check cluster status with kubectl
## Check cluster status with kubectl

Run the following command to see that you now have two services with one pod each — 1 yb-master pod (`yb-master-0`) and 1 yb-tserver pod (`yb-tserver-0`) running. For details on the roles of these pods in a YugabyteDB cluster (aka Universe), see [Universe](../../../architecture/concepts/universe/) in the Concepts section.

@@ -104,7 +104,7 @@ yb-tserver-service LoadBalancer 10.106.5.69 <pending> 6379:31320/TCP,
yb-tservers ClusterIP None <none> 7100/TCP,9000/TCP,6379/TCP,9042/TCP,5433/TCP 119s
```

## 3. Check cluster status with Admin UI
## Check cluster status with Admin UI

Under the hood, the cluster you have just created consists of two processes: [YB-Master](../../../architecture/concepts/yb-master/) which keeps track of various metadata (list of tables, users, roles, permissions, and so on), and [YB-TServer](../../../architecture/concepts/yb-tserver/) which is responsible for the actual end user requests for data updates and queries.

@@ -120,20 +120,18 @@ Now, you can view the [yb-master-0 Admin UI](../../../reference/configuration/yb
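If the UI isn't directly reachable from your machine, one option is to port-forward to the pod. This is a sketch that assumes the `yb-demo` namespace and the `yb-master-0` pod name used above:

```sh
# Sketch: forward the yb-master-0 Admin UI to localhost:7000.
$ kubectl port-forward -n yb-demo pod/yb-master-0 7000:7000
```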

### Overview and YB-Master status

The `yb-master-0` home page shows that you have a cluster with **Replication Factor** of 1 and **Num Nodes (TServers)** as `1`. The **Num User Tables** is `0` because there are no user tables created yet. The YugabyteDB version is also displayed for your reference.
The YB-Master home page shows that you have a cluster (or universe) with a replication factor of 1, a single node, and no tables. The YugabyteDB version is also displayed.

![master-home](/images/admin/master-home-kubernetes-rf1.png)

The **Masters** section highlights the YB-Master service along its corresponding cloud, region and zone placement information.
The **Masters** section highlights the 1 YB-Master along with its corresponding cloud, region, and zone placement.

### YB-TServer status

Click **See all nodes** to go to the **Tablet Servers** page where you can observe the one YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats. As new tables get added, new tablets will get automatically created and distributed evenly across all the available YB-TServers.
Click **See all nodes** to go to the **Tablet Servers** page, which lists the YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats.

![tserver-list](/images/admin/master-tservers-list-kubernetes-rf1.png)

{{<tip title="Next step" >}}
## Next step

[Explore YSQL](../../explore/ysql/)

{{< /tip >}}
22 changes: 13 additions & 9 deletions docs/content/latest/quick-start/create-local-cluster/linux.md
@@ -46,7 +46,7 @@ showAsideToc: true

</ul>

## 1. Create a local cluster
## Create a local cluster

To create a single-node local cluster with a replication factor (RF) of 1, run the following command.

@@ -56,7 +56,13 @@ $ ./bin/yugabyted start

After the cluster is created, clients can connect to the YSQL and YCQL APIs at `localhost:5433` and `localhost:9042` respectively. You can also check `~/var/data` to see the data directory and `~/var/logs` to see the logs directory.
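For example, a quick connectivity check with the bundled YSQL shell (a sketch; assumes the default `yugabyte` user and the default port):

```sh
# Sketch: connect to the YSQL API of the local node.
$ ./bin/ysqlsh -h 127.0.0.1 -p 5433 -U yugabyte
```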

## 2. Check cluster status
{{< tip title="Tip" >}}

If you have previously installed YugabyteDB (2.8 or later) and created a cluster on the same computer, you may need to [upgrade the YSQL system catalog](../../../manage/upgrade-deployment/#upgrade-the-ysql-system-catalog) to run the latest features.

{{< /tip >}}

## Check cluster status

```sh
$ ./bin/yugabyted status
@@ -77,28 +83,26 @@ $ ./bin/yugabyted status
+--------------------------------------------------------------------------------------------------+
```

## 3. Check cluster status with Admin UI
## Check cluster status with Admin UI

Under the hood, the cluster you have just created consists of two processes: [YB-Master](../../../architecture/concepts/yb-master/) which keeps track of various metadata (list of tables, users, roles, permissions, and so on), and [YB-TServer](../../../architecture/concepts/yb-tserver/) which is responsible for the actual end user requests for data updates and queries.

Each of the processes exposes its own Admin UI that can be used to check the status of the corresponding process, and perform certain administrative operations. The [YB-Master Admin UI](../../../reference/configuration/yb-master/#admin-ui) is available at <http://127.0.0.1:7000> and the [YB-TServer Admin UI](../../../reference/configuration/yb-tserver/#admin-ui) is available at <http://127.0.0.1:9000>.

### Overview and YB-Master status

The yb-master Admin UI home page shows that you have a cluster with `Replication Factor` of 1 and `Num Nodes (TServers)` as 1. `Num User Tables` is 0 since there are no user tables created yet. The YugabyteDB version number is also shown for your reference.
The YB-Master home page shows that you have a cluster (or universe) with a replication factor of 1, a single node, and no tables. The YugabyteDB version is also displayed.

![master-home](/images/admin/master-home-binary-rf1.png)

The Masters section highlights the 1 yb-master along with its corresponding cloud, region and zone placement.
The **Masters** section highlights the 1 YB-Master along with its corresponding cloud, region, and zone placement.

### YB-TServer status

Clicking `See all nodes` takes you to the Tablet Servers page where you can observe the 1 yb-tserver along with the time since it last connected to this yb-master via regular heartbeats. Since there are no user tables created yet, you can see that the `Load (Num Tablets)` is 0. As new tables get added, new tablets (aka shards) will be created automatically and distributed evenly across all the available tablet servers.
Click **See all nodes** to go to the **Tablet Servers** page, which lists the YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats. Because there are no user tables, **User Tablet-Peers / Leaders** is 0. As tables are added, new tablets (aka shards) will be created automatically and distributed evenly across all the available tablet servers.

![master-home](/images/admin/master-tservers-list-binary-rf1.png)

{{<tip title="Next step" >}}
## Next step

[Explore YSQL](../../explore/ysql/)

{{< /tip >}}
24 changes: 14 additions & 10 deletions docs/content/latest/quick-start/create-local-cluster/macos.md
@@ -6,7 +6,7 @@ description: Create a local cluster on macOS in less than five minutes.
aliases:
- /quick-start/create-local-cluster/
- /latest/quick-start/create-local-cluster/
menu/:
menu:
latest:
parent: quick-start
name: 2. Create a local cluster
@@ -49,7 +49,7 @@ showAsideToc: true

</ul>

## 1. Create a local cluster
## Create a local cluster

To create a single-node local cluster with a replication factor (RF) of 1, run the following command.

@@ -69,7 +69,13 @@ $ ./bin/yugabyted start --master_webserver_port=9999

After the cluster is created, clients can connect to the YSQL and YCQL APIs at `localhost:5433` and `localhost:9042` respectively. You can also check `~/var/data` to see the data directory and `~/var/logs` to see the logs directory.
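Similarly, the bundled YCQL shell can be pointed at the YCQL port (a sketch, using the defaults):

```sh
# Sketch: connect to the YCQL API of the local node on the default port.
$ ./bin/ycqlsh 127.0.0.1 9042
```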

## 2. Check cluster status
{{< tip title="Tip" >}}

If you have previously installed YugabyteDB (2.8 or later) and created a cluster on the same computer, you may need to [upgrade the YSQL system catalog](../../../manage/upgrade-deployment/#upgrade-the-ysql-system-catalog) to run the latest features.

{{< /tip >}}

## Check cluster status

```sh
$ ./bin/yugabyted status
@@ -90,28 +96,26 @@ $ ./bin/yugabyted status
+--------------------------------------------------------------------------------------------------+
```

## 3. Check cluster status with Admin UI
## Check cluster status with Admin UI

Under the hood, the cluster you have just created consists of two processes: [YB-Master](../../../architecture/concepts/yb-master/) which keeps track of various metadata (list of tables, users, roles, permissions, and so on), and [YB-TServer](../../../architecture/concepts/yb-tserver/) which is responsible for the actual end user requests for data updates and queries.

Each of the processes exposes its own Admin UI that can be used to check the status of the corresponding process, and perform certain administrative operations. The [YB-Master Admin UI](../../../reference/configuration/yb-master/#admin-ui) is available at [http://127.0.0.1:7000](http://127.0.0.1:7000) (replace the port number if you've started `yugabyted` with the `--master_webserver_port` flag), and the [YB-TServer Admin UI](../../../reference/configuration/yb-tserver/#admin-ui) is available at [http://127.0.0.1:9000](http://127.0.0.1:9000).
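For example, if you started `yugabyted` with `--master_webserver_port=9999` as shown earlier, the YB-Master Admin UI answers on that port instead (a sketch using the macOS `open` command):

```sh
# Sketch: open the YB-Master Admin UI on the non-default port 9999.
$ open http://127.0.0.1:9999
```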

### Overview and YB-Master status

The yb-master Admin UI home page shows that you have a cluster with `Replication Factor` of 1 and `Num Nodes (TServers)` as 1. `Num User Tables` is 0 since there are no user tables created yet. The YugabyteDB version number is also shown for your reference.
The YB-Master home page shows that you have a cluster (or universe) with a replication factor of 1, a single node, and no tables. The YugabyteDB version is also displayed.

![master-home](/images/admin/master-home-binary-rf1.png)

The Masters section highlights the 1 yb-master along with its corresponding cloud, region and zone placement.
The **Masters** section highlights the 1 YB-Master along with its corresponding cloud, region, and zone placement.

### YB-TServer status

Clicking `See all nodes` takes you to the Tablet Servers page where you can observe the 1 yb-tserver along with the time since it last connected to this yb-master via regular heartbeats. Since there are no user tables created yet, you can see that the `Load (Num Tablets)` is 0. As new tables get added, new tablets (aka shards) will be created automatically and distributed evenly across all the available tablet servers.
Click **See all nodes** to go to the **Tablet Servers** page, which lists the YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats. Because there are no user tables, **User Tablet-Peers / Leaders** is 0. As tables are added, new tablets (aka shards) will be created automatically and distributed evenly across all the available tablet servers.

![master-home](/images/admin/master-tservers-list-binary-rf1.png)

{{<tip title="Next step" >}}
## Next step

[Explore YSQL](../../explore/ysql/)

{{< /tip >}}
4 changes: 1 addition & 3 deletions docs/content/latest/quick-start/explore/ycql.md
@@ -158,8 +158,6 @@ ycqlsh> SELECT * FROM myapp.stock_market WHERE stock_symbol in ('FB', 'GOOG');
(4 rows)
```

{{<tip title="Next step" >}}
## Next step

[Build an application](../../build-apps/)

{{< /tip >}}