From 040115b21150d5baa78f5e87bfa0e3dd9ae0c888 Mon Sep 17 00:00:00 2001
From: Chris Cho <52428683+ccho-mongodb@users.noreply.github.com>
Date: Fri, 20 Dec 2024 13:12:15 -0500
Subject: [PATCH] Fix various typos
---
website/docs/docs/build/packages.md | 2 +-
.../docs/cloud-integrations/about-snowflake-native-app.md | 2 +-
website/docs/docs/cloud/git/setup-azure.md | 2 +-
website/docs/docs/cloud/manage-access/external-oauth.md | 2 +-
website/docs/docs/cloud/manage-access/invite-users.md | 2 +-
website/docs/docs/cloud/manage-access/mfa.md | 2 +-
website/docs/docs/collaborate/data-tile.md | 2 +-
website/docs/docs/collaborate/explore-multiple-projects.md | 2 +-
.../docs/core/connect-data-platform/azuresynapse-setup.md | 2 +-
.../docs/docs/core/connect-data-platform/ibmdb2-setup.md | 2 +-
website/docs/docs/core/connect-data-platform/layer-setup.md | 2 +-
.../docs/docs/core/connect-data-platform/postgres-setup.md | 4 ++--
website/docs/docs/core/connect-data-platform/spark-setup.md | 2 +-
.../docs/docs/core/connect-data-platform/upsolver-setup.md | 2 +-
website/docs/docs/dbt-cloud-apis/authentication.md | 2 +-
website/docs/docs/dbt-versions/2022-release-notes.md | 4 ++--
website/docs/docs/dbt-versions/2023-release-notes.md | 6 +++---
.../docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md | 2 +-
.../docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md | 2 +-
.../core-upgrade/11-Older versions/upgrading-to-0-16-0.md | 2 +-
.../core-upgrade/11-Older versions/upgrading-to-0-17-0.md | 2 +-
.../release-notes/98-dbt-cloud-changelog-2021.md | 2 +-
.../release-notes/99-dbt-cloud-changelog-2019-2020.md | 2 +-
website/docs/docs/deploy/merge-jobs.md | 2 +-
website/docs/docs/deploy/webhooks.md | 2 +-
25 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md
index 82ba2c3d74c..7a2c08d3e70 100644
--- a/website/docs/docs/build/packages.md
+++ b/website/docs/docs/build/packages.md
@@ -161,7 +161,7 @@ Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's creat
### Native private packages
-dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an exisiting [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos.
+dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an existing [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos.
#### Prerequisites
diff --git a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
index 86ee6a7d630..9eb1179897e 100644
--- a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
+++ b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
@@ -46,7 +46,7 @@ App users are able to access all information that's available to the API service
## Procurement
The dbt Snowflake Native App is available on the [Snowflake Marketplace](https://app.snowflake.com/marketplace/listing/GZTYZSRT2R3). Purchasing it includes access to the Native App and a dbt Cloud account that's on the Enterprise plan. Existing dbt Cloud Enterprise customers can also access it. If interested, contact your Enterprise account manager.
-If you're interested, please [contact us](matilto:sales_snowflake_marketplace@dbtlabs.com) for more information.
+If you're interested, please [contact us](mailto:sales_snowflake_marketplace@dbtlabs.com) for more information.
## Support
If you have any questions about the dbt Snowflake Native App, you may [contact our Support team](mailto:dbt-snowflake-marketplace@dbtlabs.com) for help. Please provide information about your installation of the Native App, including your dbt Cloud account ID and Snowflake account identifier.
diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md
index 273660ba3dd..c6213b49453 100644
--- a/website/docs/docs/cloud/git/setup-azure.md
+++ b/website/docs/docs/cloud/git/setup-azure.md
@@ -155,7 +155,7 @@ The service user's permissions will also power which repositories a team can sel
While it's common to enforce multi-factor authentication (MFA) for normal user accounts, service user authentication must not need an extra factor. If you enable a second factor for the service user, this can interrupt production runs and cause a failure to clone the repository. In order for the OAuth access token to work, the best practice is to remove any more burden of proof of identity for service users.
-As a result, MFA must be explicity disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended.
+As a result, MFA must be explicitly disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended.
**To disable MFA for a single user using the Office 365 Administration console:**
diff --git a/website/docs/docs/cloud/manage-access/external-oauth.md b/website/docs/docs/cloud/manage-access/external-oauth.md
index 380d0a3d1cc..c25b44d1513 100644
--- a/website/docs/docs/cloud/manage-access/external-oauth.md
+++ b/website/docs/docs/cloud/manage-access/external-oauth.md
@@ -144,7 +144,7 @@ Adjust the other settings as needed to meet your organization's configurations i
1. Navigate back to the dbt Cloud **Account settings** —> **Integrations** page you were on at the beginning. It’s time to start filling out all of the fields.
1. `Integration name`: Give the integration a descriptive name that includes identifying information about the Okta environment so future users won’t have to guess where it belongs.
2. `Client ID` and `Client secrets`: Retrieve these from the Okta application page.
-
+
3. Authorize URL and Token URL: Found in the metadata URI.
diff --git a/website/docs/docs/cloud/manage-access/invite-users.md b/website/docs/docs/cloud/manage-access/invite-users.md
index 0922b4dc991..b9a12bae7c6 100644
--- a/website/docs/docs/cloud/manage-access/invite-users.md
+++ b/website/docs/docs/cloud/manage-access/invite-users.md
@@ -66,7 +66,7 @@ Once the user completes this process, their email and user information will popu
* Is there a limit to the number of users I can invite? _Your ability to invite users is limited to the number of licenses you have available._
* Why are users are clicking the invitation link and getting an `Invalid Invitation Code` error? _We have seen scenarios where embedded secure link technology (such as enterprise Outlooks [Safe Link](https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-links-about?view=o365-worldwide) feature) can result in errors when clicking on the email link. Be sure to include the `getdbt.com` URL in the allowlists for these services._
-* Can I have a mixure of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._
+* Can I have a mixture of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._
* What happens if I need to resend the invitation? _From the Users page, click on the invite record, and you will be presented with the option to resend the invitation._
* What can I do if I entered an email address incorrectly? _From the Users page, click on the invite record, and you will be presented with the option to revoke it. Once revoked, generate a new invitation to the correct email address._
diff --git a/website/docs/docs/cloud/manage-access/mfa.md b/website/docs/docs/cloud/manage-access/mfa.md
index bcddc04f072..644fcdb32c2 100644
--- a/website/docs/docs/cloud/manage-access/mfa.md
+++ b/website/docs/docs/cloud/manage-access/mfa.md
@@ -58,7 +58,7 @@ Choose the next steps based on your preferred enrollment selection:
2. Follow the instructions in the modal window and click **Use security key**.
-
+
3. Scan the QR code or insert and touch activate your USB key to begin the process. Follow the on-screen prompts.
diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md
index 0edd9d7c44e..077a4f5a740 100644
--- a/website/docs/docs/collaborate/data-tile.md
+++ b/website/docs/docs/collaborate/data-tile.md
@@ -63,7 +63,7 @@ Follow these steps to set up your data health tile:
6. Navigate back to dbt Explorer and select an exposure.
7. Below the **Data health** section, expand on the toggle for instructions on how to embed the exposure tile (if you're an account admin with develop permissions).
8. In the expanded toggle, you'll see a text field where you can paste your **Metadata Only token**.
-
+
9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to add to your dashboard.
diff --git a/website/docs/docs/collaborate/explore-multiple-projects.md b/website/docs/docs/collaborate/explore-multiple-projects.md
index b15e133a49e..3a0cce8a9e6 100644
--- a/website/docs/docs/collaborate/explore-multiple-projects.md
+++ b/website/docs/docs/collaborate/explore-multiple-projects.md
@@ -27,7 +27,7 @@ When viewing a downstream (child) project that imports and refs public models fr
- Clicking on a model opens a side panel containing general information about the model, such as the specific dbt Cloud project that produces that model, description, package, and more.
- Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, if you have the permissions to do so.
-
+
## Explore the project-level lineage graph
diff --git a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
index 0a0347df9ea..0c22209d75c 100644
--- a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
+++ b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
@@ -55,7 +55,7 @@ Microsoft made several changes related to connection encryption. Read more about
### Authentication methods
This adapter is based on the adapter for Microsoft SQL Server.
-Therefor, the same authentication methods are supported.
+Therefore, the same authentication methods are supported.
The configuration is the same except for 1 major difference:
instead of specifying `type: sqlserver`, you specify `type: synapse`.
diff --git a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
index 692342466b0..c9c91d3ef5b 100644
--- a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
+++ b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
@@ -65,7 +65,7 @@ your_profile_name:
| type | The specific adapter to use | Required | `ibmdb2` |
| schema | Specify the schema (database) to build models into | Required | `analytics` |
| database | Specify the database you want to connect to | Required | `testdb` |
-| host | Hostname or IP-adress | Required | `localhost` |
+| host | Hostname or IP address | Required | `localhost` |
| port | The port to use | Optional | `50000` |
| protocol | Protocol to use | Optional | `TCPIP` |
| username | The username to use to connect to the server | Required | `my-username` |
diff --git a/website/docs/docs/core/connect-data-platform/layer-setup.md b/website/docs/docs/core/connect-data-platform/layer-setup.md
index 051094297a2..9514d6bb9e6 100644
--- a/website/docs/docs/core/connect-data-platform/layer-setup.md
+++ b/website/docs/docs/core/connect-data-platform/layer-setup.md
@@ -83,7 +83,7 @@ _Parameters:_
| Syntax | Description |
| --------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `MODEL_TYPE` | Type of the model your want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continious outcomes such as CLV prediction. |
+| `MODEL_TYPE` | Type of the model you want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continuous outcomes such as CLV prediction. |
| `FEATURES` | Input column names as a list to train your AutoML model. |
| `TARGET` | Target column that you want to predict. |
diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md
index b6f34a00e0b..ef6b42d6236 100644
--- a/website/docs/docs/core/connect-data-platform/postgres-setup.md
+++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md
@@ -68,7 +68,7 @@ The `role` config controls the Postgres role that dbt assumes when opening new c
#### sslmode
-The `sslmode` config controls how dbt connectes to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`.
+The `sslmode` config controls how dbt connects to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`.
#### sslcert
@@ -99,7 +99,7 @@ If `dbt-postgres` encounters an operational error or timeout when opening a new
`psycopg2-binary` is installed by default when installing `dbt-postgres`.
Installing `psycopg2-binary` uses a pre-built version of `psycopg2` which may not be optimized for your particular machine.
This is ideal for development and testing workflows where performance is less of a concern and speed and ease of install is more important.
-However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and archtecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
+However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and architecture. In this scenario, speed and ease of install are less important as ongoing usage is the focus.
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 611642e91b7..97bba29e66e 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -25,7 +25,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
-If connecting to Databricks via ODBC driver, it requires `pyodbc`. Depending on your system, you can install it seperately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details.
+If connecting to Databricks via the ODBC driver, it requires `pyodbc`. Depending on your system, you can install it separately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details.
If connecting to a Spark cluster via the generic thrift or http methods, it requires `PyHive`.
diff --git a/website/docs/docs/core/connect-data-platform/upsolver-setup.md b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
index 8e4203e0b0c..164d46ee8af 100644
--- a/website/docs/docs/core/connect-data-platform/upsolver-setup.md
+++ b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
@@ -10,7 +10,7 @@ meta:
min_core_version: 'v1.5.0'
cloud_support: Not Supported
min_supported_version: 'n/a'
- slack_channel_name: 'Upsolver Comunity'
+ slack_channel_name: 'Upsolver Community'
slack_channel_link: 'https://join.slack.com/t/upsolvercommunity/shared_invite/zt-1zo1dbyys-hj28WfaZvMh4Z4Id3OkkhA'
platform_name: 'Upsolver'
config_page: '/reference/resource-configs/upsolver-configs'
diff --git a/website/docs/docs/dbt-cloud-apis/authentication.md b/website/docs/docs/dbt-cloud-apis/authentication.md
index 43a08d84fd7..e817512c1fc 100644
--- a/website/docs/docs/dbt-cloud-apis/authentication.md
+++ b/website/docs/docs/dbt-cloud-apis/authentication.md
@@ -31,7 +31,7 @@ pagination_prev: null
You should use service tokens broadly for any production workflow where you need a service account. You should use PATs only for developmental workflows _or_ dbt Cloud client workflows that require user context. The following examples show you when to use a personal access token (PAT) or a service token:
-* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal acess token for these integrations today.
+* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal access token for these integrations today.
* **Production Terraform** — Use a service token since this is a production workflow and is acting as a service account and not a user account.
* **Cloud CLI** — Use a PAT since the dbt Cloud CLI works within the context of a user (the user is making the requests and has to operate within the context of their user account).
* **Testing a custom script and staging Terraform or Postman** — We recommend using a PAT as this is a developmental workflow and is scoped to the user making the changes. When you push this script or Terraform into production, use a service token instead.
diff --git a/website/docs/docs/dbt-versions/2022-release-notes.md b/website/docs/docs/dbt-versions/2022-release-notes.md
index b46c259a6d8..f180f664372 100644
--- a/website/docs/docs/dbt-versions/2022-release-notes.md
+++ b/website/docs/docs/dbt-versions/2022-release-notes.md
@@ -51,7 +51,7 @@ packages:
-## Novemver 2022
+## November 2022
### The dbt Cloud + Databricks experience is getting even better
@@ -241,4 +241,4 @@ We started the new year with a gift! Multi-tenant Team and Enterprise accounts c
#### Performance improvements and enhancements
-* We added client-side naming validation for file or folder creation.
\ No newline at end of file
+* We added client-side naming validation for file or folder creation.
diff --git a/website/docs/docs/dbt-versions/2023-release-notes.md b/website/docs/docs/dbt-versions/2023-release-notes.md
index ec635a051dc..4dd10c36b5c 100644
--- a/website/docs/docs/dbt-versions/2023-release-notes.md
+++ b/website/docs/docs/dbt-versions/2023-release-notes.md
@@ -35,7 +35,7 @@ Archived release notes for dbt Cloud from 2023
To learn more, refer to [Extended attributes](/docs/dbt-cloud-environments#extended-attributes).
- The **Extended Atrributes** text box is available from your environment's settings page:
+ The **Extended Attributes** text box is available from your environment's settings page:
@@ -183,7 +183,7 @@ Archived release notes for dbt Cloud from 2023
Previously in dbt Cloud, you could only rerun an errored job from start but now you can also rerun it from its point of failure.
- You can view which job failed to complete successully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs).
+ You can view which job failed to complete successfully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs).
@@ -812,7 +812,7 @@ Archived release notes for dbt Cloud from 2023
--
+-
The dbt Cloud Scheduler now prevents queue clog by canceling unnecessary runs of over-scheduled jobs.
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
index 9a4712af528..2a4a9d96528 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
@@ -92,7 +92,7 @@ You can read more about each of these behavior changes in the following links:
- (Introduced, disabled by default) [`skip_nodes_if_on_run_start_fails` project config flag](/reference/global-configs/behavior-changes#behavior-change-flags). If the flag is set and **any** `on-run-start` hook fails, mark all selected nodes as skipped.
- `on-run-start/end` hooks are **always** run, regardless of whether they passed or failed last time.
- (Introduced, disabled by default) [[Redshift] `restrict_direct_pg_catalog_access`](/reference/global-configs/behavior-changes#redshift-restrict_direct_pg_catalog_access). If the flag is set the adapter will use the Redshift API (through the Python client) if available, or query Redshift's `information_schema` tables instead of using `pg_` tables.
-- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not proprly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting.
+- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not properly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting.
- (Introduced, disabled by default) [`require_batched_execution_for_custom_microbatch_strategy`](/reference/global-configs/behavior-changes#custom-microbatch-strategy). Set to `True` if you use a custom microbatch macro to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the microbatch strategy.
## Adapter specific features and functionalities
diff --git a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
index 6139cdcfc6f..11c78bd4bfa 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
@@ -110,7 +110,7 @@ The built-in [collect_freshness](https://github.com/dbt-labs/dbt-core/blob/1.5.l
{{ return(load_result('collect_freshness')) }}
```
-Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommmend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
+Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
Likewise, if your project has reimplemented the `ref` macro with custom logic, you will need to update the logic in your macro as described [here](https://docs.getdbt.com/reference/dbt-jinja-functions/builtins).
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
index d6fc6f9f49a..d610cdb4455 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
@@ -80,7 +80,7 @@ The `snowflake__list_schemas` macro should now return an Agate dataframe with a
column named `"name"`. If you are overriding the `snowflake__list_schemas` macro in your
project, you can find more information about this change in [this pull request](https://github.com/dbt-labs/dbt-core/pull/2171).
-### Snowflake databases wih 10,000 schemas
+### Snowflake databases with 10,000 schemas
dbt no longer supports running against Snowflake databases containing more than
10,000 schemas. This is due limitations of the `show schemas in database` query
that dbt now uses to find schemas in a Snowflake database. If your dbt project
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
index 6a19bdcf808..00d6a70bd05 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
@@ -237,7 +237,7 @@ modules, please be mindful of the following changes to dbt's Python
dependencies:
Core:
-- Pinned `Jinja2` depdendency to `2.11.2`
+- Pinned `Jinja2` dependency to `2.11.2`
- Pinned `hologram` to `0.0.7`
- Require Python >= `3.6.3`
diff --git a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
index 996229807a1..f4ea44c6b95 100644
--- a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
+++ b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
@@ -326,7 +326,7 @@ Rolling out a few long-term bets to ensure that our beloved dbt Cloud does not f
- Fix NoSuchKey error
- Guarantee unique notification settings per account, user, and type
- Fix for account notification settings
-- Dont show deleted projects on notifications page
+- Don't show deleted projects on notifications page
- Fix unicode error while decoding last_chunk
- Show more relevant errors to customers
- Groups are now editable by non-sudo requests
diff --git a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
index a6b68cf9d51..32a33d95301 100644
--- a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
+++ b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
@@ -464,7 +464,7 @@ This release adds a new version of dbt (0.16.1), fixes a number of IDE bugs, and
- Fixed issue preventing temporary PR schemas from being dropped when PR is closed.
- Fix issues with IDE tabs not updating query compile and run results.
- Fix issues with query runtime timer in IDE for compile and run query functions.
-- Fixed what settings are displayed on the account settings page to allign with the user's permissions.
+- Fixed what settings are displayed on the account settings page to align with the user's permissions.
- Fixed bug with checking user's permissions in frontend when user belonged to more than one project.
- Fixed bug with access control around environments and file system/git interactions that occurred when using IDE.
- Fixed a bug with Environments too generously matching repository.
diff --git a/website/docs/docs/deploy/merge-jobs.md b/website/docs/docs/deploy/merge-jobs.md
index a187e3992f8..e148498ed01 100644
--- a/website/docs/docs/deploy/merge-jobs.md
+++ b/website/docs/docs/deploy/merge-jobs.md
@@ -20,7 +20,7 @@ By using CD in dbt Cloud, you can take advantage of deferral to build only the e
1. On your deployment environment page, click **Create job** > **Merge job**.
1. Options in the **Job settings** section:
- **Job name** — Specify the name for the merge job.
- - **Description** — Provide a descripion about the job.
+ - **Description** — Provide a description of the job.
- **Environment** — By default, it’s set to the environment you created the job from.
1. In the **Git trigger** section, the **Run on merge** option is enabled by default. Every time a PR merges (to a base
branch configured in the environment) in your Git repo, this job will get triggered to run.
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index 52ce2a1fe56..4ff9c350344 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -217,7 +217,7 @@ GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription
{
"id": "wsu_12345abcde",
"account_identifier": "act_12345abcde",
- "name": "Notication Webhook",
+ "name": "Notification Webhook",
"description": "Webhook used to trigger notifications in Slack",
"job_ids": [],
"event_types": [