"will" removal (github#593)

* API docs

* Cloud

* MST

* Getting Started

* How Tos

* Top level pages

* Code quick starts

* Tutorials

* Tutorials Part Deux

* Contributing
Loquacity authored Nov 16, 2021
1 parent 56f19ca commit 7296f80
Showing 175 changed files with 1,259 additions and 1,258 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -110,7 +110,7 @@ this, you *will* end up with merge conflicts.
```bash
git checkout latest
```
You will get a message like this:
You get a message like this:
```bash
Switched to branch 'latest'
Your branch is up to date with 'origin/latest'.
@@ -127,7 +127,7 @@ this, you *will* end up with merge conflicts.
```
1. If you are continuing work you began earlier, check out the branch that
contains your work. For new work, create a new branch. Doing this regularly as
you are working will mean you keep your local copies up to date and avoid
you are working means you keep your local copies up to date and avoid
conflicts. You should do it at least every day before you begin work, and again
whenever you switch branches.

14 changes: 7 additions & 7 deletions api/add_compression_policy.md
@@ -1,6 +1,6 @@
# add_compression_policy() <tag type="community" content="community" />
Allows you to set a policy by which the system will compress a chunk
automatically in the background after it reaches a given age.
# add_compression_policy() <tag type="community" content="community" />
Allows you to set a policy by which the system compresses a chunk
automatically in the background after it reaches a given age.

Note that compression policies can only be created on hypertables that already
have compression enabled, e.g., via the [`ALTER TABLE`][compression_alter-table] command
@@ -11,7 +11,7 @@ to set `timescaledb.compress` and other configuration parameters.
|Name|Type|Description|
|---|---|---|
| `hypertable` |REGCLASS| Name of the hypertable|
| `compress_after` | INTERVAL or INTEGER | The age after which the policy job will compress chunks|
| `compress_after` | INTERVAL or INTEGER | The age after which the policy job compresses chunks|

The `compress_after` parameter should be specified differently depending on the type of the time column of the hypertable:
- For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time interval should be an INTERVAL type.
@@ -22,9 +22,9 @@ the [integer_now_func][set_integer_now_func] to be set).

|Name|Type|Description|
|---|---|---|
| `if_not_exists` | BOOLEAN | Setting to true will cause the command to fail with a warning instead of an error if a compression policy already exists on the hypertable. Defaults to false.|
| `if_not_exists` | BOOLEAN | Setting to true causes the command to fail with a warning instead of an error if a compression policy already exists on the hypertable. Defaults to false.|

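For instance, to make the call idempotent when a policy might already be in place, the `if_not_exists` flag can be set. This is only a sketch, reusing the `cpu` hypertable from the sample below:

```sql
-- Compress chunks older than 60 days, but only warn if a policy already exists
SELECT add_compression_policy('cpu', INTERVAL '60 days', if_not_exists => true);
```
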
### Sample Usage
### Sample Usage
Add a policy to compress chunks older than 60 days on the 'cpu' hypertable.

``` sql
@@ -39,4 +39,4 @@ SELECT add_compression_policy('table_with_bigint_time', BIGINT '600000');


[compression_alter-table]: /api/:currentVersion:/compression/alter_table_compression/
[set_integer_now_func]: /hypertable/set_integer_now_func
[set_integer_now_func]: /hypertable/set_integer_now_func
22 changes: 11 additions & 11 deletions api/add_data_node.md
@@ -1,27 +1,27 @@
## add_data_node() <tag type="community">Community</tag>

Add a new data node on the access node to be used by distributed
hypertables. The data node will automatically be used by distributed
hypertables. The data node is automatically used by distributed
hypertables that are created after the data node has been added, while
existing distributed hypertables require an additional
[`attach_data_node`](/distributed-hypertables/attach_data_node).

If the data node already exists, the command will abort with either an
If the data node already exists, the command aborts with either an
error or a notice depending on the value of `if_not_exists`.

For security purposes, only superusers or users with necessary
privileges can add data nodes (see below for details). When adding a
data node, the access node will also try to connect to the data node
data node, the access node also tries to connect to the data node
and therefore needs a way to authenticate with it. TimescaleDB
currently supports several different such authentication methods for
flexibility (including trust, user mappings, password, and certificate
methods). Please refer to [Setting up Multi-Node
TimescaleDB][multinode] for more information about node-to-node
authentication.

Unless `bootstrap` is false, the function will attempt to bootstrap
Unless `bootstrap` is false, the function attempts to bootstrap
the data node by:
1. Creating the database given in `database` that will serve as the
1. Creating the database given in `database` that serves as the
new data node.
2. Loading the TimescaleDB extension in the new database.
3. Setting metadata to make the data node part of the distributed
@@ -43,13 +43,13 @@ after it is added.

| Name | Description |
|----------------------|-------------------------------------------------------|
| `database` | Database name where remote hypertables will be created. The default is the current database name. |
| `database` | Database name where remote hypertables are created. The default is the current database name. |
| `port` | Port to use on the remote data node. The default is the PostgreSQL port used by the access node on which the function is executed. |
| `if_not_exists` | Do not fail if the data node already exists. The default is `FALSE`. |
| `bootstrap` | Bootstrap the remote data node. The default is `TRUE`. |
| `password` | Password for authenticating with the remote data node during bootstrapping or validation. A password only needs to be provided if the data node requires password authentication and a password for the user does not exist in a local password file on the access node. If password authentication is not used, the specified password will be ignored. |
| `password` | Password for authenticating with the remote data node during bootstrapping or validation. A password only needs to be provided if the data node requires password authentication and a password for the user does not exist in a local password file on the access node. If password authentication is not used, the specified password is ignored. |

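As an illustration of how these arguments fit together, here is a sketch of adding and bootstrapping a data node; the node name and host are placeholder assumptions:

```sql
-- Add a data node on a hypothetical host, bootstrapping its database if needed;
-- only warn if a node with this name already exists
SELECT add_data_node('dn1', host => 'dn1.example.com', port => 5432, if_not_exists => true);
```
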
### Returns
### Returns

| Column | Description |
|---------------------|---------------------------------------------------|
@@ -63,7 +63,7 @@ after it is added.

#### Errors

An error will be given if:
An error is given if:
* The function is executed inside a transaction.
* The function is executed in a database that is already a data node.
* The data node already exists and `if_not_exists` is `FALSE`.
@@ -87,7 +87,7 @@ Note, however, that superuser privileges might still be necessary on
the data node in order to bootstrap it, including creating the
TimescaleDB extension on the data node unless it is already installed.

### Sample Usage
### Sample Usage

Let's assume that you have an existing hypertable `conditions` and
want to use `time` as the time partitioning column and `location` as
@@ -111,4 +111,4 @@ SELECT create_distributed_hypertable('conditions', 'time', 'location');
```

Note that this does not offer any performance advantages over using a
regular hypertable, but it can be useful for testing.
regular hypertable, but it can be useful for testing.
4 changes: 2 additions & 2 deletions api/add_dimension.md
@@ -97,8 +97,8 @@ queries.
| `created` | BOOLEAN | True if the dimension was added, false when `if_not_exists` is true and no dimension was added. |

When executing this function, either `number_partitions` or
`chunk_time_interval` must be supplied, which will dictate if the
dimension will use hash or interval partitioning.
`chunk_time_interval` must be supplied, which dictates if the
dimension uses hash or interval partitioning.

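To make the two modes concrete, here is a sketch of both calls, assuming a `conditions` hypertable with a `location` column and an additional `time_2` time column (the names are assumptions):

```sql
-- Hash partitioning: supply number_partitions for a space dimension
SELECT add_dimension('conditions', 'location', number_partitions => 4);

-- Interval partitioning: supply chunk_time_interval for a further time dimension
SELECT add_dimension('conditions', 'time_2', chunk_time_interval => INTERVAL '1 day');
```
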
The `chunk_time_interval` should be specified as follows:

2 changes: 1 addition & 1 deletion api/add_job.md
@@ -15,7 +15,7 @@ multiple example actions.

|Name|Type|Description|
|---|---|---|
| `config` | JSONB | Job-specific configuration (this will be passed to the function when executed) |
| `config` | JSONB | Job-specific configuration (this is passed to the function when executed) |
| `initial_start` | TIMESTAMPTZ | Time of first execution of job |
| `scheduled` | BOOLEAN | Set to `FALSE` to exclude this job from scheduling. Defaults to `TRUE`. |

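As a sketch of how `config` is passed through, the following schedules a hypothetical user-defined procedure once an hour; the procedure name and config keys are assumptions:

```sql
-- Run the procedure user_defined_action every hour, handing it the JSONB config
SELECT add_job('user_defined_action', '1h', config => '{"hypertable": "metrics"}');
```
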
10 changes: 5 additions & 5 deletions api/add_reorder_policy.md
@@ -1,11 +1,11 @@
## add_reorder_policy() <tag type="community">Community</tag>
## add_reorder_policy() <tag type="community">Community</tag>
Create a policy to reorder chunks on a given hypertable index in the
background. (See [reorder_chunk](/hypertable/reorder_chunk)). Only one reorder policy may
exist per hypertable. Only chunks that are the 3rd from the most recent will be
exist per hypertable. Only chunks that are the 3rd from the most recent are
reordered to avoid reordering chunks that are still being inserted into.

<highlight type="tip">
Once a chunk has been reordered by the background worker it will not be
Once a chunk has been reordered by the background worker it is not
reordered again. So if one were to insert significant amounts of data in to
older chunks that have already been reordered, it might be necessary to manually
re-run the [reorder_chunk](/api/latest/hypertable/reorder_chunk) function on older chunks, or to drop
@@ -25,14 +25,14 @@ and re-create the policy if many older chunks have been affected.
|---|---|---|
| `if_not_exists` | BOOLEAN | Set to true to avoid throwing an error if the reorder_policy already exists. A notice is issued instead. Defaults to false. |

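For example, a policy can be created idempotently against a hypothetical index; this is a sketch only, and the index name is an assumption:

```sql
-- Reorder chunks of 'conditions' by this index; warn instead of erroring if a policy exists
SELECT add_reorder_policy('conditions', 'conditions_device_id_time_idx', if_not_exists => true);
```
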
### Returns
### Returns

|Column|Type|Description|
|---|---|---|
|`job_id`| INTEGER | TimescaleDB background job id created to implement this policy|


### Sample Usage
### Sample Usage


```sql
20 changes: 10 additions & 10 deletions api/add_retention_policy.md
@@ -1,22 +1,22 @@
## add_retention_policy() <tag type="community">Community</tag>
## add_retention_policy() <tag type="community">Community</tag>

Create a policy to drop chunks older than a given interval of a particular
hypertable or continuous aggregate on a schedule in the background. (See [drop_chunks](/hypertable/drop_chunks)).
This implements a data retention policy and will remove data on a schedule. Only
This implements a data retention policy and removes data on a schedule. Only
one retention policy may exist per hypertable.

### Required Arguments

|Name|Type|Description|
|---|---|---|
| `relation` | REGCLASS | Name of the hypertable or continuous aggregate to create the policy for. |
| `drop_after` | INTERVAL or INTEGER | Chunks fully older than this interval when the policy is run will be dropped|
| `drop_after` | INTERVAL or INTEGER | Chunks fully older than this interval when the policy is run are dropped|

The `drop_after` parameter should be specified differently depending on the
The `drop_after` parameter should be specified differently depending on the
type of the time column of the hypertable:
- For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time
- For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time
interval should be an INTERVAL type.
- For hypertables with integer-based timestamps: the time interval should be an
- For hypertables with integer-based timestamps: the time interval should be an
integer type (this requires the [integer_now_func](/hypertable/set_integer_now_func) to be set).

### Optional Arguments
@@ -25,20 +25,20 @@ integer type (this requires the [integer_now_func](/hypertable/set_integer_now_f
|---|---|---|
| `if_not_exists` | BOOLEAN | Set to true to avoid throwing an error if the drop_chunks_policy already exists. A notice is issued instead. Defaults to false. |

### Returns
### Returns

|Column|Type|Description|
|---|---|---|
|`job_id`| INTEGER | TimescaleDB background job id created to implement this policy|

### Sample Usage
### Sample Usage

Create a data retention policy to discard chunks greater than 6 months old:
```sql
SELECT add_retention_policy('conditions', INTERVAL '6 months');
```

Create a data retention policy with an integer-based time column:
Create a data retention policy with an integer-based time column:
```sql
SELECT add_retention_policy('conditions', BIGINT '600000');
```
```
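
Because `relation` can also name a continuous aggregate, a retention policy can prune old materialized data in the same way. A sketch, with an assumed continuous aggregate name:

```sql
-- Drop materialized data older than one year from a hypothetical continuous aggregate
SELECT add_retention_policy('conditions_summary_daily', INTERVAL '1 year');
```
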
28 changes: 14 additions & 14 deletions api/alter_job.md
@@ -1,10 +1,10 @@
## alter_job() <tag type="community">Community</tag>
## alter_job() <tag type="community">Community</tag>

Actions scheduled via TimescaleDB's automation framework run periodically in a
background worker. You can change the schedule of their execution using `alter_job`.
To alter an existing job, you must refer to it by `job_id`.
The `job_id` which executes a given action and its current schedule can be found
either in the `timescaledb_information.jobs` view, which lists information
either in the `timescaledb_information.jobs` view, which lists information
about every scheduled action, as well as in `timescaledb_information.job_stats`.
The `job_stats` view additionally contains information about when each job was
last run and other useful statistics for deciding what the new schedule should be.
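
For instance, the id of the job to alter can be looked up first; this is a sketch, and the selected columns are assumptions about the view:

```sql
-- Find each background job's id, the procedure it runs, and its current schedule
SELECT job_id, proc_name, schedule_interval
FROM timescaledb_information.jobs;
```
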
@@ -20,28 +20,28 @@ last run and other useful statistics for deciding what the new schedule should b
|Name|Type|Description|
|---|---|---|
| `schedule_interval` | INTERVAL | The interval at which the job runs |
| `max_runtime` | INTERVAL | The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped |
| `max_retries` | INTEGER | The number of times the job will be retried should it fail |
| `retry_period` | INTERVAL | The amount of time the scheduler will wait between retries of the job on failure |
| `max_runtime` | INTERVAL | The maximum amount of time the job is allowed to run by the background worker scheduler before it is stopped |
| `max_retries` | INTEGER | The number of times the job is retried should it fail |
| `retry_period` | INTERVAL | The amount of time the scheduler waits between retries of the job on failure |
| `scheduled` | BOOLEAN | Set to `FALSE` to exclude this job from being run as background job. |
| `config` | JSONB | Job-specific configuration (this will be passed to the function when executed)|
| `config` | JSONB | Job-specific configuration (this is passed to the function when executed)|
| `next_start` | TIMESTAMPTZ | The next time at which to run the job. The job can be paused by setting this value to 'infinity' (and restarted with a value of now()). |
| `if_exists` | BOOLEAN | Set to true to avoid throwing an error if the job does not exist, a notice will be issued instead. Defaults to false. |
| `if_exists` | BOOLEAN | Set to true to avoid throwing an error if the job does not exist, a notice is issued instead. Defaults to false. |

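To illustrate the `next_start` behavior described above, a job can be paused and later resumed; the job id `1000` matches the sample usage below and is otherwise an assumption:

```sql
-- Pause the job by pushing its next start out to infinity
SELECT alter_job(1000, next_start => 'infinity');

-- Resume it by scheduling the next run for now
SELECT alter_job(1000, next_start => now());
```
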
### Returns
### Returns

|Column|Type|Description|
|---|---|---|
| `job_id` | INTEGER | the id of the job being modified |
| `schedule_interval` | INTERVAL | The interval at which the job runs |
| `max_runtime` | INTERVAL | The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped |
| `max_retries` | INTEGER | The number of times the job will be retried should it fail |
| `retry_period` | INTERVAL | The amount of time the scheduler will wait between retries of the job on failure |
| `scheduled` | BOOLEAN | True if this job will be executed by the TimescaleDB scheduler. |
| `config` | JSONB | Job-specific configuration (this will be passed to the function when executed)|
| `max_runtime` | INTERVAL | The maximum amount of time the job is allowed to run by the background worker scheduler before it is stopped |
| `max_retries` | INTEGER | The number of times the job is retried should it fail |
| `retry_period` | INTERVAL | The amount of time the scheduler waits between retries of the job on failure |
| `scheduled` | BOOLEAN | True if this job is executed by the TimescaleDB scheduler. |
| `config` | JSONB | Job-specific configuration (this is passed to the function when executed)|
| `next_start` | TIMESTAMPTZ | The next time at which to run the job. |

### Sample Usage
### Sample Usage

```sql
SELECT alter_job(1000, schedule_interval => INTERVAL '2 days');
10 changes: 5 additions & 5 deletions api/alter_table_compression.md
@@ -10,12 +10,12 @@ ALTER TABLE <table_name> SET (timescaledb.compress, timescaledb.compress_orderby
timescaledb.compress_segmentby = '<column_name> [, ...]'
);
```
#### Required Options
#### Required Options
|Name|Type|Description|
|---|---|---|
| `timescaledb.compress` | BOOLEAN | Enable/Disable compression |

#### Other Options
#### Other Options
|Name|Type|Description|
|---|---|---|
| `timescaledb.compress_orderby` | TEXT |Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable's time column. |
@@ -24,12 +24,12 @@ timescaledb.compress_segmentby = '<column_name> [, ...]'
### Parameters
|Name|Type|Description|
|---|---|---|
| `table_name` | TEXT |Hypertable that will support compression |
| `table_name` | TEXT |Hypertable that supports compression |
| `column_name` | TEXT | Column used to order by and/or segment by |

### Sample Usage
### Sample Usage
Configure a hypertable that ingests device data to use compression.

```sql
ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');
```
```
8 changes: 4 additions & 4 deletions api/approximate_row_count.md
@@ -1,17 +1,17 @@
## approximate_row_count()
## approximate_row_count()

Get approximate row count for hypertable, distributed hypertable, or regular PostgreSQL table based on catalog estimates.
This function supports tables with nested inheritance and declarative partitioning.

The accuracy of approximate_row_count depends on the database having up-to-date statistics about the table or hypertable, which are updated by VACUUM, ANALYZE, and a few DDL commands. If you have auto-vacuum configured on your table or hypertable, or changes to the table are relatively infrequent, you might not need to explicitly ANALYZE your table as shown below. Otherwise, if your table statistics are too out-of-date, running this command will update your statistics and yield more accurate approximation results.
The accuracy of approximate_row_count depends on the database having up-to-date statistics about the table or hypertable, which are updated by VACUUM, ANALYZE, and a few DDL commands. If you have auto-vacuum configured on your table or hypertable, or changes to the table are relatively infrequent, you might not need to explicitly ANALYZE your table as shown below. Otherwise, if your table statistics are too out-of-date, running this command updates your statistics and yields more accurate approximation results.

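As a sketch of that workflow, assuming a hypertable named `conditions` whose statistics may be stale:

```sql
-- Refresh planner statistics, then read the catalog-based estimate
ANALYZE conditions;
SELECT * FROM approximate_row_count('conditions');
```
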
### Required Arguments

|Name|Type|Description|
|---|---|---|
| `relation` | REGCLASS | Hypertable or regular PostgreSQL table to get row count for. |

### Sample Usage
### Sample Usage

Get the approximate row count for a single hypertable.
```sql
@@ -25,4 +25,4 @@ The expected output:
approximate_row_count
----------------------
240000
```
```