Fix links
Signed-off-by: ChrisChinchilla <chris@chronosphere.io>
ChrisChinchilla committed Sep 16, 2020
1 parent 0d8b1fa commit 3325ed8
Showing 27 changed files with 76 additions and 95 deletions.
5 changes: 1 addition & 4 deletions docs/config.toml
@@ -80,7 +80,7 @@ offlineSearch = false
# Order sections in menu by "weight" or "title". Default to "weight"
ordersectionsby = "weight"
# Change default color scheme with a variant one. Can be "red", "blue", "green".
-themeVariant = ""
+themeVariant = "blue"
twitter = "m3db_io"

# TODO: Do not like doing this really
@@ -111,9 +111,6 @@ identifier = "ds"
url = "https://github.com/m3db/m3"
weight = 10

-[permalinks]
-posts = "/:sections/:slug.md"
-
[outputs]
home = [ "HTML", "RSS", "JSON"]
page = [ "HTML"]
14 changes: 7 additions & 7 deletions docs/content/_index.md
@@ -16,21 +16,21 @@ other supporting infrastructure.
M3 has several features, provided as discrete components, which make it an ideal platform for time series data at scale:

- A distributed time series database, [M3DB](m3db/), that provides scalable storage for time series data and a reverse index.
-- A sidecar process, [M3Coordinator](integrations/prometheus.md), that allows M3DB to act as the long-term storage for Prometheus.
-- A distributed query engine, [M3Query](query_engine/index.md), with native support for PromQL and Graphite (M3QL coming soon).
+- A sidecar process, [M3Coordinator](integrations/prometheus), that allows M3DB to act as the long-term storage for Prometheus.
+- A distributed query engine, [M3Query](query_engine), with native support for PromQL and Graphite (M3QL coming soon).
<!-- Add M3Aggregator link -->
- An aggregation tier, M3Aggregator, that runs as a dedicated metrics aggregator/downsampler allowing metrics to be stored at various retentions at different resolutions.

## Getting Started

-**Note:** Make sure to read our [Operational Guides](operational_guide/index.md) before running in production!
+**Note:** Make sure to read our [Operational Guides](operational_guide) before running in production!

Getting started with M3 is as easy as following one of the How-To guides.

-- [Single M3DB node deployment](how_to/single_node.md)
-- [Clustered M3DB deployment](how_to/cluster_hard_way.md)
-- [M3DB on Kubernetes](how_to/kubernetes.md)
-- [Isolated M3Query deployment](how_to/query.md)
+- [Single M3DB node deployment](how_to/single_node)
+- [Clustered M3DB deployment](how_to/cluster_hard_way)
+- [M3DB on Kubernetes](how_to/kubernetes)
+- [Isolated M3Query deployment](how_to/query)

## Support

8 changes: 4 additions & 4 deletions docs/content/faqs/_index.md
@@ -11,7 +11,7 @@ Yes, you can definitely do that. It's all just about setting the etcd endpoints
Yes, you can use the [Prometheus remote write client](https://github.com/m3db/prometheus_remote_client_golang/).

- **Why does my dbnode keep OOM’ing?**
-Refer to the [troubleshooting guide](../troubleshooting/index.md).
+Refer to the [troubleshooting guide](../troubleshooting).

- **Do you support PromQL?**
Yes, M3Query and M3Coordinator both support PromQL.
@@ -33,7 +33,7 @@ If you’re adding namespaces, the m3dbnode process will pickup the new namespac
If you’re removing or modifying an existing namespace, you’ll need to restart the m3dbnode process in order to complete the namespace deletion/modification process. It is recommended to restart one node at a time and wait for a node to be completely bootstrapped before restarting another node.

- **How do I set up aggregation in the coordinator?**
-Refer to the [Aggregation section](../how_to/query.md) of the M3Query how-to guide.
+Refer to the [Aggregation section](../how_to/query) of the M3Query how-to guide.

- **How do I set up aggregation using a separate aggregation tier?**
See this [WIP documentation](https://github.com/m3db/m3/pull/1741/files#diff-0a1009f86783ca8fd4499418e556c6f5).
@@ -65,7 +65,7 @@ etcdClusters:
```
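
For reference, a minimal sketch of pointing an M3 component at an external etcd cluster. The key layout follows the sample configs of this era and the endpoints are placeholders, so treat both as assumptions:

```yaml
config:
  service:
    env: default_env
    zone: embedded
    service: m3db
    cacheDir: /var/lib/m3kv
    etcdClusters:
      - zone: embedded
        endpoints:
          # Placeholder addresses; point these at your etcd cluster
          - 10.0.0.1:2379
          - 10.0.0.2:2379
          - 10.0.0.3:2379
```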

- **How can I get a heap dump, cpu profile, etc.**
-See our docs on the [/debug/dump api](../troubleshooting/index.md).
+See our docs on the [/debug/dump api](../troubleshooting).

- **How much memory utilization should I run M3DB at?**
We recommend not going above 50%.
@@ -74,7 +74,7 @@ We recommend not going above 50%.
TBA

- **What is the recommended way to create a new namespace?**
-Refer to the [Namespace configuration guide](../operational_guide/namespace_configuration.md).
+Refer to the [Namespace configuration guide](../operational_guide/namespace_configuration).

- **How can I see the cardinality of my metrics?**
Currently, the best way is to go to the [M3DB Node Details Dashboard](https://grafana.com/grafana/dashboards/8126) and look at the `Ticking` panel. However, this is not entirely accurate because of the way data is stored in M3DB -- time series are stored inside time-based blocks that you configure. In actuality, the `Ticking` graph shows you how many unique series there are for the most recent block that has persisted. In the future, we plan to introduce easier ways to determine the number of unique time series.
2 changes: 1 addition & 1 deletion docs/content/how_to/aggregator.md
@@ -14,7 +14,7 @@ Similar to M3DB, `m3aggregator` supports clustering and replication by default.

## Configuration

-Before setting up m3aggregator, make sure that you have at least [one M3DB node running](single_node.md) and a dedicated m3coordinator setup.
+Before setting up m3aggregator, make sure that you have at least [one M3DB node running](single_node) and a dedicated m3coordinator setup.

We highly recommend running with a replication factor of at least 2 for an `m3aggregator` deployment. If you run with replication factor 1, restarting an aggregator will temporarily interrupt the stream of aggregated metrics and cause some data loss.
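
As an illustration, initializing an aggregator placement with a replication factor of 2 could look like the sketch below. The placement init endpoint is part of the coordinator admin API; the instance names, isolation groups, ports, and shard count are placeholder assumptions:

```shell
# Placeholder hosts/ports; replication_factor of 2 per the recommendation above
curl -X POST http://localhost:7201/api/v1/services/m3aggregator/placement/init -d '{
  "num_shards": 64,
  "replication_factor": 2,
  "instances": [
    {
      "id": "m3aggregator01",
      "isolation_group": "rack-a",
      "zone": "embedded",
      "weight": 100,
      "endpoint": "m3aggregator01:6000",
      "hostname": "m3aggregator01",
      "port": 6000
    },
    {
      "id": "m3aggregator02",
      "isolation_group": "rack-b",
      "zone": "embedded",
      "weight": 100,
      "endpoint": "m3aggregator02:6000",
      "hostname": "m3aggregator02",
      "port": 6000
    }
  ]
}'
```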

12 changes: 6 additions & 6 deletions docs/content/how_to/cluster_hard_way.md
@@ -56,7 +56,7 @@ M3DB_HOST_ID=m3db001 m3dbnode -f config.yml

### Kernel

-Ensure you review our [recommended kernel configuration](../operational_guide/kernel_configuration.md) before running M3DB in production, as M3DB may exceed some default kernel limits.
+Ensure you review our [recommended kernel configuration](../operational_guide/kernel_configuration) before running M3DB in production, as M3DB may exceed some default kernel limits.
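
As a starting point, the relevant settings can be applied with `sysctl`. The values below are assumptions taken as typical recommendations; confirm them against the kernel configuration guide:

```shell
# Raise limits that M3DB commonly exceeds (values are illustrative)
sudo sysctl -w vm.max_map_count=3000000
sudo sysctl -w vm.swappiness=1
sudo sysctl -w fs.file-max=3000000
sudo sysctl -w fs.nr_open=3000000
```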

## Config files

@@ -107,8 +107,8 @@ m3dbnode -f <config-name.yml>

The recommended way to create a namespace and initialize a topology is to use the `/api/v1/database/create` API. Below is an example.

-**Note:** In order to create a more custom setup, please refer to the [namespace configuration](../operational_guide/namespace_configuration.md) and
-[placement configuration](../operational_guide/placement_configuration.md) guides, though this is discouraged.
+**Note:** In order to create a more custom setup, please refer to the [namespace configuration](../operational_guide/namespace_configuration) and
+[placement configuration](../operational_guide/placement_configuration) guides, though this is discouraged.

```shell
curl -X POST http://localhost:7201/api/v1/database/create -d '{
@@ -167,11 +167,11 @@ If you need to setup multiple namespaces, you can run the above `/api/v1/databas

### Replication factor (RF)

-We recommend RF3, where each replica is spread across failure domains such as a rack, data center or availability zone. See [Replication Factor Recommendations](../operational_guide/replication_and_deployment_in_zones.md) for more specifics.
+We recommend RF3, where each replica is spread across failure domains such as a rack, data center or availability zone. See [Replication Factor Recommendations](../operational_guide/replication_and_deployment_in_zones) for more specifics.

### Shards

-See [placement configuration](../operational_guide/placement_configuration.md) to determine the appropriate number of shards to specify.
+See [placement configuration](../operational_guide/placement_configuration) to determine the appropriate number of shards to specify.

## Test it out

@@ -216,4 +216,4 @@ curl -sS -X POST http://localhost:9003/query -d '{

## Integrations

-[Prometheus as a long term storage remote read/write endpoint](../integrations/prometheus.md).
+[Prometheus as a long term storage remote read/write endpoint](../integrations/prometheus).
2 changes: 1 addition & 1 deletion docs/content/how_to/kubernetes.md
@@ -279,7 +279,7 @@ curl -sSf -X POST localhost:7201/api/v1/placement -d '{
### Prometheus
-As mentioned in our integrations [guide](../integrations/prometheus.md), M3DB can be used as a [remote read/write
+As mentioned in our integrations [guide](../integrations/prometheus), M3DB can be used as a [remote read/write
endpoint](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cremote_write%3E) for Prometheus.
If you run Prometheus on your Kubernetes cluster you can easily point it at M3DB in your Prometheus server config:
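
A minimal sketch of that server config, assuming the coordinator is reachable as an `m3coordinator` service (the hostname is a placeholder; the paths are the coordinator's standard remote read/write routes):

```yaml
# "m3coordinator" is a placeholder service name
remote_write:
  - url: "http://m3coordinator:7201/api/v1/prom/remote/write"
remote_read:
  - url: "http://m3coordinator:7201/api/v1/prom/remote/read"
```
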
8 changes: 4 additions & 4 deletions docs/content/how_to/query.md
@@ -5,11 +5,11 @@ weight: 4
---


-m3query is used to query data that is stored in M3DB. For instance, if you are using the Prometheus remote write endpoint with [m3coordinator](../integrations/prometheus.md), you can use m3query instead of the Prometheus remote read endpoint. By doing so, you get all of the benefits of m3query's engine such as [block processing](http://m3db.github.io/m3/query_engine/architecture/blocks/). Furthermore, since m3query provides a Prometheus compatible API, you can use 3rd party graphing and alerting solutions like Grafana.
+m3query is used to query data that is stored in M3DB. For instance, if you are using the Prometheus remote write endpoint with [m3coordinator](../integrations/prometheus), you can use m3query instead of the Prometheus remote read endpoint. By doing so, you get all of the benefits of m3query's engine such as [block processing](http://m3db.github.io/m3/query_engine/architecture/blocks/). Furthermore, since m3query provides a Prometheus compatible API, you can use 3rd party graphing and alerting solutions like Grafana.
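
Because the API is Prometheus-compatible, existing PromQL tooling can point straight at m3query. A quick sketch, with the port, metric name, and timestamps as placeholder assumptions:

```shell
# Prometheus-style range query; metric and time range are illustrative
curl 'http://localhost:7201/api/v1/query_range?query=up&start=1600000000&end=1600003600&step=60s'
```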

## Configuration

-Before setting up m3query, make sure that you have at least [one M3DB node running](single_node.md). In order to start m3query, you need to configure a `yaml` file that will be used to connect to M3DB. Here is a link to a [sample config](https://github.com/m3db/m3/blob/master/src/query/config/m3query-local-etcd.yml) file that is used for an embedded etcd cluster within M3DB.
+Before setting up m3query, make sure that you have at least [one M3DB node running](single_node). In order to start m3query, you need to configure a `yaml` file that will be used to connect to M3DB. Here is a link to a [sample config](https://github.com/m3db/m3/blob/master/src/query/config/m3query-local-etcd.yml) file that is used for an embedded etcd cluster within M3DB.

### Running

@@ -24,11 +24,11 @@ Or you can run it with Docker using the Docker file located at `$GOPATH/src/gith

### Namespaces

-All namespaces that you wish to query from must be configured when [setting up M3DB](single_node.md). If you wish to add or change an existing namespace, please follow the namespace operational guide [here](../operational_guide/namespace_configuration.md).
+All namespaces that you wish to query from must be configured when [setting up M3DB](single_node). If you wish to add or change an existing namespace, please follow the namespace operational guide [here](../operational_guide/namespace_configuration).

### etcd

-The configuration file linked above uses an embedded etcd cluster, which is fine for development purposes. However, if you wish to use this in production, you will want an [external etcd](../operational_guide/etcd.md) cluster.
+The configuration file linked above uses an embedded etcd cluster, which is fine for development purposes. However, if you wish to use this in production, you will want an [external etcd](../operational_guide/etcd) cluster.

<!-- TODO: link to etcd operational guide -->

10 changes: 5 additions & 5 deletions docs/content/how_to/single_node.md
@@ -21,9 +21,9 @@ docker pull quay.io/m3db/m3dbnode:latest
docker run -p 7201:7201 -p 7203:7203 -p 9003:9003 --name m3db -v $(pwd)/m3db_data:/var/lib/m3db quay.io/m3db/m3dbnode:latest
```
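
Once the container is up, a quick sanity check is to hit the embedded coordinator; the `/health` route is assumed here from the coordinator's standard endpoints:

```shell
# Port 7201 is mapped by the docker run command above
curl http://localhost:7201/health
```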

-**Note:** For the single node case, we use this [sample config file](https://github.com/m3db/m3/blob/master/src/dbnode/config/m3dbnode-local-etcd.yml). If you inspect the file, you'll see that all the configuration is grouped by `coordinator` or `db`. That's because this setup runs `M3DB` and `M3Coordinator` as one application. While this is convenient for testing and development, you'll want to run clustered `M3DB` with a separate `M3Coordinator` in production. You can read more about that [here](cluster_hard_way.md).
+**Note:** For the single node case, we use this [sample config file](https://github.com/m3db/m3/blob/master/src/dbnode/config/m3dbnode-local-etcd.yml). If you inspect the file, you'll see that all the configuration is grouped by `coordinator` or `db`. That's because this setup runs `M3DB` and `M3Coordinator` as one application. While this is convenient for testing and development, you'll want to run clustered `M3DB` with a separate `M3Coordinator` in production. You can read more about that [here](cluster_hard_way).

-Next, create an initial namespace for your metrics in the database using the cURL below. Keep in mind that the provided `namespaceName` must match the namespace in the `local` section of the `M3Coordinator` YAML configuration, and if you choose to [add any additional namespaces](../operational_guide/namespace_configuration.md) you'll need to add them to the `local` section of `M3Coordinator`'s YAML config as well.
+Next, create an initial namespace for your metrics in the database using the cURL below. Keep in mind that the provided `namespaceName` must match the namespace in the `local` section of the `M3Coordinator` YAML configuration, and if you choose to [add any additional namespaces](../operational_guide/namespace_configuration) you'll need to add them to the `local` section of `M3Coordinator`'s YAML config as well.
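
As an illustration, the matching `local` section might look like the sketch below; the key names are assumed from the sample config and may differ between versions:

```yaml
# Hypothetical sketch; verify key names against the sample config file
coordinator:
  local:
    namespaces:
      - namespace: default
        type: unaggregated
        retention: 48h
```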

<!-- TODO: Retention actually different -->

Expand All @@ -35,7 +35,7 @@ curl -X POST http://localhost:7201/api/v1/database/create -d '{
}'
```

-**Note**: The `api/v1/database/create` endpoint is an abstraction over two concepts in M3DB called [placements](../operational_guide/placement.md) and [namespaces](../operational_guide/namespace_configuration.md). If a placement doesn't exist, it will create one based on the `type` argument; otherwise, if the placement already exists, it just creates the specified namespace. For now it's enough to just understand that it creates M3DB namespaces (tables), but if you're going to run a clustered M3 setup in production, make sure you familiarize yourself with the links above.
+**Note**: The `api/v1/database/create` endpoint is an abstraction over two concepts in M3DB called [placements](../operational_guide/placement) and [namespaces](../operational_guide/namespace_configuration). If a placement doesn't exist, it will create one based on the `type` argument; otherwise, if the placement already exists, it just creates the specified namespace. For now it's enough to just understand that it creates M3DB namespaces (tables), but if you're going to run a clustered M3 setup in production, make sure you familiarize yourself with the links above.

Placement initialization may take a minute or two and you can check on the status of this by running the following:
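
The command itself is elided by the diff view above; a hedged equivalent using the coordinator's placement endpoint would be:

```shell
# jq is optional, used only for pretty-printing the placement JSON
curl http://localhost:7201/api/v1/services/m3db/placement | jq .
```
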

Expand Down Expand Up @@ -92,7 +92,7 @@ curl -sS -X POST http://localhost:9003/writetagged -d '{

**Note:** In the above example we include the tag `__name__`. This is because `__name__` is a
reserved tag in Prometheus and will make querying the metric much easier. For example, if you have
-[M3Query](query.md) setup as a Prometheus datasource in Grafana, you can then query for the metric
+[M3Query](query) setup as a Prometheus datasource in Grafana, you can then query for the metric
using the following PromQL query:

```shell
@@ -144,4 +144,4 @@ curl -sS -X POST http://localhost:9003/query -d '{
}
```

-Now that you've got the M3 stack up and running, take a look at the rest of our documentation to see how you can integrate with [Prometheus](../integrations/prometheus.md) and [Graphite](../integrations/graphite.md).
+Now that you've got the M3 stack up and running, take a look at the rest of our documentation to see how you can integrate with [Prometheus](../integrations/prometheus) and [Graphite](../integrations/graphite).
10 changes: 5 additions & 5 deletions docs/content/how_to/use_as_tsdb.md
@@ -6,13 +6,13 @@ title: Using M3DB as a general purpose time series database

## Overview

-M3 has native integrations that make it particularly easy to use as a metrics store for [Prometheus](../integrations/prometheus.md) and [Graphite](../integrations/graphite.md). M3DB can also be used as a general purpose distributed time series database by itself.
+M3 has native integrations that make it particularly easy to use as a metrics store for [Prometheus](../integrations/prometheus) and [Graphite](../integrations/graphite). M3DB can also be used as a general purpose distributed time series database by itself.

## Data Model

### IDs and Tags

-M3DB's data model allows multiple namespaces, each of which can be [configured and tuned independently](../operational_guide/namespace_configuration.md).
+M3DB's data model allows multiple namespaces, each of which can be [configured and tuned independently](../operational_guide/namespace_configuration).

Each namespace can also be configured with its own schema (see "Schema Modeling" section below).

@@ -62,7 +62,7 @@ message VehicleLocation {
}
```
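
The schema fragment above is truncated by the diff view. A hedged reconstruction of a complete schema in the same spirit, with illustrative field names and numbers:

```protobuf
syntax = "proto3";

// Illustrative reconstruction; fields are not taken from the commit
message VehicleLocation {
  double latitude = 1;
  double longitude = 2;
  double fuel_percent = 3;
  string status = 4;
}
```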

-While M3DB strives to support the entire [proto3 language spec](https://developers.google.com/protocol-buffers/docs/proto3), only [the following features are currently supported](https://github.com/m3db/m3/blob/master/src/dbnode/encoding/proto/docs/encoding.md):
+While M3DB strives to support the entire [proto3 language spec](https://developers.google.com/protocol-buffers/docs/proto3), only [the following features are currently supported](https://github.com/m3db/m3/blob/master/src/dbnode/encoding/proto/docs/encoding):

1. [Scalar values](https://developers.google.com/protocol-buffers/docs/proto3#scalar)
2. Nested messages
@@ -108,13 +108,13 @@ message VehicleLocation {

While the latter schema is valid, the attributes field will not be compressed; users should weigh the tradeoffs between a more expressive schema and better compression for each use case.
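
For illustration, the uncompressed case described here typically involves a map field; a hypothetical example:

```protobuf
message VehicleLocation {
  double latitude = 1;
  double longitude = 2;
  // Valid proto3, but map values fall outside the compressed scalar path
  map<string, string> attributes = 3;
}
```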

-For more details on the compression scheme and its limitations, review [the documentation for M3DB's compressed Protobuf encoding](https://github.com/m3db/m3/blob/master/src/dbnode/encoding/proto/docs/encoding.md).
+For more details on the compression scheme and its limitations, review [the documentation for M3DB's compressed Protobuf encoding](https://github.com/m3db/m3/blob/master/src/dbnode/encoding/proto/docs/encoding).

### Getting Started

#### M3DB setup

-For more advanced setups, it's best to follow the guides on how to configure an M3DB cluster [manually](./cluster_hard_way.md) or [using Kubernetes](./kubernetes.md). However, this tutorial will walk you through configuring a single node setup locally for development.
+For more advanced setups, it's best to follow the guides on how to configure an M3DB cluster [manually](./cluster_hard_way) or [using Kubernetes](./kubernetes). However, this tutorial will walk you through configuring a single node setup locally for development.

First, run the following command to pull the latest M3DB image:

4 changes: 2 additions & 2 deletions docs/content/integrations/grafana.md
@@ -8,11 +8,11 @@ M3 supports a variety of Grafana integrations.

## Prometheus / Graphite Sources

-M3Coordinator can function as a datasource for Prometheus as well as Graphite. See the [Prometheus integration](./prometheus.md) and [Graphite integration](./graphite.md) documents respectively for more information.
+M3Coordinator can function as a datasource for Prometheus as well as Graphite. See the [Prometheus integration](./prometheus) and [Graphite integration](./graphite) documents respectively for more information.

## Pre-configured Prometheus Dashboards

-All M3 applications expose Prometheus metrics on port `7203` by default as described in the [Prometheus integration guide](./prometheus.md), so if you're already monitoring your M3 stack with Prometheus and Grafana you can use our pre-configured dashboards.
+All M3 applications expose Prometheus metrics on port `7203` by default as described in the [Prometheus integration guide](./prometheus), so if you're already monitoring your M3 stack with Prometheus and Grafana you can use our pre-configured dashboards.
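
If you are not already scraping these, a minimal Prometheus scrape config could look like this (job name and target are placeholders):

```yaml
scrape_configs:
  - job_name: "m3"
    static_configs:
      # Default metrics port noted above; host is a placeholder
      - targets: ["localhost:7203"]
```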

[M3DB Prometheus / Grafana dashboard](https://grafana.com/dashboards/8126)
