From 3167d6e5cef5f9457e91568176a70f5ec351d528 Mon Sep 17 00:00:00 2001 From: DjP-iX <133042991+DjP-iX@users.noreply.github.com> Date: Wed, 24 Apr 2024 11:55:38 -0400 Subject: [PATCH 1/2] Fix reported errors --- content/SCALE/SCALECLIReference/System/CLIBootenv.md | 2 +- content/SCALE/SCALETutorials/Apps/CommunityApps/Immich.md | 2 +- content/SCALE/SCALETutorials/Apps/CommunityApps/jellyfin.md | 2 +- content/SCALE/SCALETutorials/Shares/MixedModeShares.md | 2 +- .../SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md | 2 +- .../SCALETutorials/SystemSettings/Advanced/ManageGPUSCALE.md | 2 +- content/SCALE/SCALEUIReference/Apps/InstallCustomAppScreens.md | 2 +- content/Solutions/Integrations/VMware/DeployTNinVMWare.md | 2 +- content/TrueCommand/TCGettingStarted/UserAccounts.md | 2 +- static/includes/ClusterSetup.md | 2 +- 10 files changed, 10 insertions(+), 10 deletions(-) diff --git a/content/SCALE/SCALECLIReference/System/CLIBootenv.md b/content/SCALE/SCALECLIReference/System/CLIBootenv.md index 36e6a3a995..df30fec8bd 100644 --- a/content/SCALE/SCALECLIReference/System/CLIBootenv.md +++ b/content/SCALE/SCALECLIReference/System/CLIBootenv.md @@ -33,7 +33,7 @@ Use [`query`](#query-command) to find the boot environment `id`. `activate` returns `true` when successful. `query` returns `active` values of `N` (now) for the current boot environment and `R` (reboot) for the pending one. -Enter [`system reboot`]({{< relref "/scale/scaleclireference/system/_index.md #reboot-command" >}}) to reboot the system and activate the pending boot environment. +Enter [`system reboot`]({{< relref "/scale/scaleclireference/system/_index.md#reboot-command" >}}) to reboot the system and activate the pending boot environment. 
#### Usage diff --git a/content/SCALE/SCALETutorials/Apps/CommunityApps/Immich.md b/content/SCALE/SCALETutorials/Apps/CommunityApps/Immich.md index f29bccf4fd..4afe5edc46 100644 --- a/content/SCALE/SCALETutorials/Apps/CommunityApps/Immich.md +++ b/content/SCALE/SCALETutorials/Apps/CommunityApps/Immich.md @@ -146,7 +146,7 @@ Enter a plain integer followed by the measurement suffix, for example 4G or 123M Systems with compatible GPU(s) display devices in **GPU Configuration**. Use the **GPU Resource** dropdown menu(s) to configure device allocation. -See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md #allocating-gpu" >}}) for more information about allocating GPU devices in TrueNAS SCALE. +See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md#allocating-gpu" >}}) for more information about allocating GPU devices in TrueNAS SCALE. ## Immich Database Backup and Restore diff --git a/content/SCALE/SCALETutorials/Apps/CommunityApps/jellyfin.md b/content/SCALE/SCALETutorials/Apps/CommunityApps/jellyfin.md index b3bbe62801..641cd5034d 100644 --- a/content/SCALE/SCALETutorials/Apps/CommunityApps/jellyfin.md +++ b/content/SCALE/SCALETutorials/Apps/CommunityApps/jellyfin.md @@ -204,4 +204,4 @@ Enter a plain integer followed by the measurement suffix, for example 4G. Systems with compatible GPU(s) display devices in **GPU Configuration**. Use the **GPU Resource** dropdown menu(s) to configure device allocation. -See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md #allocating-gpu" >}}) for more information about allocating GPU devices in TrueNAS SCALE. +See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md#allocating-gpu" >}}) for more information about allocating GPU devices in TrueNAS SCALE. 
diff --git a/content/SCALE/SCALETutorials/Shares/MixedModeShares.md b/content/SCALE/SCALETutorials/Shares/MixedModeShares.md index 0346109f12..bc9208792f 100644 --- a/content/SCALE/SCALETutorials/Shares/MixedModeShares.md +++ b/content/SCALE/SCALETutorials/Shares/MixedModeShares.md @@ -178,4 +178,4 @@ After setting the dataset permission, connect to the share. ### Connecting to a Multiprotocol Share After creating and configuring the shares, connect to the multiprotocol share using either SMB or NFS protocols from a variety of client operating systems including Windows, Apple, FreeBSD, and Linux/Unix systems. -For more information on accessing shares, see [Mounting the SMB Share]({{< relref "/SCALE/SCALETutorials/Shares/_index.md #mounting-the-smb-share" >}}) and [Connecting to the NFS Share]({{< relref "AddingNFSShares.md #connecting-to-the-nfs-share" >}}). +For more information on accessing shares, see [Mounting the SMB Share]({{< relref "/SCALE/SCALETutorials/Shares/_index.md#mounting-the-smb-share" >}}) and [Connecting to the NFS Share]({{< relref "AddingNFSShares.md #connecting-to-the-nfs-share" >}}). diff --git a/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md b/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md index 29323d24b8..32bfb6a41f 100644 --- a/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md +++ b/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md @@ -145,6 +145,6 @@ The `storage disk resize` command supports SAS, SATA, SAT (interposer) and NVMe f. Go to **Review** and click **Update Pool**. -See [Managing Pools]({{< relref "/SCALE/scaletutorials/storage/managepoolsscale.md #adding-a-vdev-using-pool-manager" >}}) for more information on using **Add Vdevs to Pool**. +See [Managing Pools]({{< relref "/SCALE/scaletutorials/storage/managepoolsscale.md#adding-a-vdev-using-pool-manager" >}}) for more information on using **Add Vdevs to Pool**.
{{< /expand >}} diff --git a/content/SCALE/SCALETutorials/SystemSettings/Advanced/ManageGPUSCALE.md b/content/SCALE/SCALETutorials/SystemSettings/Advanced/ManageGPUSCALE.md index d64c9c9a61..6f42c2e644 100644 --- a/content/SCALE/SCALETutorials/SystemSettings/Advanced/ManageGPUSCALE.md +++ b/content/SCALE/SCALETutorials/SystemSettings/Advanced/ManageGPUSCALE.md @@ -10,7 +10,7 @@ tags: --- Systems with more than one graphics processing unit (GPU) installed can isolate additional GPU device(s) from the host operating system (OS) and allocate them for use by a virtual machine (VM). -Isolated GPU devices are unavailable to the OS and for [allocation to applications]({{< relref "/scale/scaletutorials/apps/_index.md #allocating-gpu" >}}). +Isolated GPU devices are unavailable to the OS and for [allocation to applications]({{< relref "/scale/scaletutorials/apps/_index.md#allocating-gpu" >}}). {{< include file="/static/includes/AdvancedSettingsWarningSCALE.md" >}} diff --git a/content/SCALE/SCALEUIReference/Apps/InstallCustomAppScreens.md b/content/SCALE/SCALEUIReference/Apps/InstallCustomAppScreens.md index b24354b962..6a125bd99b 100644 --- a/content/SCALE/SCALEUIReference/Apps/InstallCustomAppScreens.md +++ b/content/SCALE/SCALEUIReference/Apps/InstallCustomAppScreens.md @@ -258,7 +258,7 @@ For fewer issues, select **Kill existing pods before creating new ones**. Settings only display if the system detects available GPU device(s). Select the number of devices to allocate from the **Select GPU** dropdown list of devices. -See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md #allocating-gpu" >}}) for more information. +See [Allocating GPU]({{< relref "/scale/scaletutorials/apps/_index.md#allocating-gpu" >}}) for more information. 
### Resource Limits Settings diff --git a/content/Solutions/Integrations/VMware/DeployTNinVMWare.md b/content/Solutions/Integrations/VMware/DeployTNinVMWare.md index 8910e01e85..e7aa7810a5 100644 --- a/content/Solutions/Integrations/VMware/DeployTNinVMWare.md +++ b/content/Solutions/Integrations/VMware/DeployTNinVMWare.md @@ -173,7 +173,7 @@ When the console opens, it displays the TrueNAS Console Setup screen. {{< trueimage src="/images/VMWareESXi/TrueNASConsoleSetup.png" alt="TrueNAS Console Setup" id="TrueNAS Console Setup" >}} -Follow the installation instructions documented for [SCALE]({{< relref "/SCALE/GettingStarted/Install/InstallingSCALE.md #using-the-truenas-installer" >}}) or [CORE]({{< relref "/CORE/GettingStarted/Install.md #install-process" >}}) to complete the installation of TrueNAS. +Follow the installation instructions documented for [SCALE]({{< relref "/SCALE/GettingStarted/Install/InstallingSCALE.md#using-the-truenas-installer" >}}) or [CORE]({{< relref "/CORE/GettingStarted/Install.md#install-process" >}}) to complete the installation of TrueNAS. ## Editing the Virtual Machine diff --git a/content/TrueCommand/TCGettingStarted/UserAccounts.md b/content/TrueCommand/TCGettingStarted/UserAccounts.md index 9c1eee22e9..42f16a75e9 100644 --- a/content/TrueCommand/TCGettingStarted/UserAccounts.md +++ b/content/TrueCommand/TCGettingStarted/UserAccounts.md @@ -61,7 +61,7 @@ To verify a user email address and set 2FA: You can assign users to existing teams by selecting a team from the **Teams** dropdown to add the user to that team. You can assign users to multiple teams. -For more in-depth information regarding teams, see the [Teams Documentation]({{< relref "/TrueCommand/AdminGuide/Users.md #organizing-users-into-teams" >}}). +For more in-depth information regarding teams, see the [Teams Documentation]({{< relref "/TrueCommand/AdminGuide/Users.md#organizing-users-into-teams" >}}). 
To limit non-administrative account access to connected systems, configure the **System Access** and/or **System Groups** sections. This requires first configuring [system connections]({{< relref "Systems.md" >}}) and/or system groups in TrueCommand. diff --git a/static/includes/ClusterSetup.md b/static/includes/ClusterSetup.md index 781ed907b9..628aa2ca34 100644 --- a/static/includes/ClusterSetup.md +++ b/static/includes/ClusterSetup.md @@ -60,7 +60,7 @@ The new records appear inside the zone as they save. If not already complete: -1. [Deploy TrueCommand 2.2 or later in a Docker container]({{< relref "/content/TrueCommand/TCGettingStarted/Install/installtcdocker.md" >}}). +1. [Deploy TrueCommand 2.2 or later in a Docker container]({{< relref "InstallTCDocker.md" >}}). The system used for the TrueCommand container cannot be any of the TrueNAS SCALE systems intended for the cluster. 2. Enter the TrueCommand IP address in a browser, and create the first user. From e99eebc621a7e2bac8d093f652b97709b105b315 Mon Sep 17 00:00:00 2001 From: DjP-iX <133042991+DjP-iX@users.noreply.github.com> Date: Wed, 24 Apr 2024 11:56:39 -0400 Subject: [PATCH 2/2] Additional cleanup --- content/SCALE/GettingStarted/Migrate/MigratePrep.md | 2 +- content/SCALE/SCALECLIReference/Storage/CLIScrub.md | 10 +++++----- content/SCALE/SCALECLIReference/System/CLIBoot.md | 12 ++++++------ .../SCALE/SCALECLIReference/System/CLITrueCommand.md | 2 +- .../Apps/CommunityApps/InstallPiHoleApp.md | 2 +- content/SCALE/SCALETutorials/Apps/UsingCustomApp.md | 2 +- content/SCALE/SCALETutorials/ConfigReportsScale.md | 2 +- .../SCALE/SCALETutorials/Shares/MixedModeShares.md | 4 ++-- .../Storage/Disks/SLOGOverprovisionSCALE.md | 2 +- .../SCALE/SCALEUIReference/ReportingScreensSCALE.md | 2 +- content/Solutions/Optimizations/Security.md | 2 +- static/includes/BrowsingSnapshotCollections1.md | 2 +- static/includes/MinIODatasetRequirements.md | 2 +- 13 files changed, 23 insertions(+), 23 deletions(-) diff --git 
a/content/SCALE/GettingStarted/Migrate/MigratePrep.md b/content/SCALE/GettingStarted/Migrate/MigratePrep.md index 611bd9921c..1b2de0eda0 100644 --- a/content/SCALE/GettingStarted/Migrate/MigratePrep.md +++ b/content/SCALE/GettingStarted/Migrate/MigratePrep.md @@ -85,7 +85,7 @@ CORE Enterprise customers are encouraged to contact Support for assistance with After updating to the latest publicly available release of CORE and making any changes to CORE user accounts or any other settings, download these files and keep them in a safe place where you can access them if you need to revert to CORE with a clean install using the CORE iso file. After completing the steps that apply to your CORE system listed above, download the [SCALE ISO file](https://www.truenas.com/download-tn-scale/) and save it to your computer. -Burn the iso to a USB drive (see **Installing on Physical Hardware** in [Installing SCALE]({{< relref "InstallingSCALE.md #installing-on-physical-hardware" >}})) when upgrading a physical system. +Burn the iso to a USB drive (see **Installing on Physical Hardware** in [Installing SCALE]({{< relref "InstallingSCALE.md#installing-on-physical-hardware" >}})) when upgrading a physical system. ## Deprecated Services in SCALE The built-in services listed in this section are available in CORE, but deprecated in SCALE 22.12.3 (Bluefin) and removed in later SCALE releases. diff --git a/content/SCALE/SCALECLIReference/Storage/CLIScrub.md b/content/SCALE/SCALECLIReference/Storage/CLIScrub.md index 589eeeb702..41f804898d 100644 --- a/content/SCALE/SCALECLIReference/Storage/CLIScrub.md +++ b/content/SCALE/SCALECLIReference/Storage/CLIScrub.md @@ -30,7 +30,7 @@ The `create` command configures a new scheduled scrub task. `create` has one required property, `pool`, and four optional properties (see Create Properties below). The value for `pool` is the pool id number.
-Use [`storage pool query`]({{< relref "CLIPool.md #query-command" >}}) to find the id for the selected pool and enter this integer. +Use [`storage pool query`]({{< relref "CLIPool.md#query-command" >}}) to find the id for the selected pool and enter this integer. Enter the full command string along with any optional properties you want to configure, or accept the default values, and then press Enter. `create` returns an empty line when successful. @@ -40,7 +40,7 @@ Use [`query`](#query-command) to confirm the task is created correctly. {{< truetable >}} | Property | Required | Description | Syntax Example | |----------|----------|-------------|---------------| -| `pool` | Yes | Enter the id number for the selected pool.
Use [`storage pool query`]({{< relref "CLIPool.md #query-command" >}}) to find the id numbers for all pools on the system. | pool=1 | +| `pool` | Yes | Enter the id number for the selected pool.
Use [`storage pool query`]({{< relref "CLIPool.md#query-command" >}}) to find the id numbers for all pools on the system. | pool=1 | | `threshold` | No | Enter the number of days before a completed scrub is allowed to run again. Default value is `35`. This controls the task schedule. For example, scheduling a scrub to run daily and setting Threshold days to 7 means the scrub attempts to run daily. When the scrub succeeds, it continues to check daily but does not run again until the seven days have elapsed. Using a multiple of seven ensures the scrub always occurs on the same weekday. | threshold=7 | | `description` | No | Enter a human-readable name or description for the scrub task. | description= "scrub task 1" | | `schedule` | No | Enter an array of properties that specify the date and time when the scrub task runs. The default setting is to run the task weekly, every Sunday at 00:00 (0 0 * * 0). Enter `{}` without property arguments to accept default values for schedule properties, or enter each property argument enclosed in square brackets with double-quoted properties and values. Separate each array property argument enclosed in square brackets `[]` with a comma and space. Properties are:
Command example shows the default values for each property in the object array. | schedule={["minute"="00"], ["hour"="*"], ["dom"="*"], ["month"="*"], ["dow"="*"]} | @@ -178,7 +178,7 @@ The `run` command activates a one-time scrub task for the selected pool. #### Description `run` has one required property, `name`, and one optional property, `threshold`. -To find the `name` of the pool you want to scrub, use [`storage pool query`]({{< relref "CLIPool.md #query-command" >}}) or [`storage dataset query id`]({{< relref "CLIDataset.md #query-command" >}}) to return the paths of all pools and child datasets on the system. +To find the `name` of the pool you want to scrub, use [`storage pool query`]({{< relref "CLIPool.md#query-command" >}}) or [`storage dataset query id`]({{< relref "CLIDataset.md#query-command" >}}) to return the paths of all pools and child datasets on the system. `threshold` defaults to 35 days. To preserve system resources, the scrub runs only if time since the pool was last scrubbed is greater than the threshold value. @@ -186,7 +186,7 @@ To override the threshold and run immediately, you can use `threshold=0`. Enter the full command string and then press Enter. `run` returns an empty line. -To check if the scrub starts successfully, you can use [`system alert list`]({{< relref "CLIAlert.md #list-command" >}}) to view system alerts. +To check if the scrub starts successfully, you can use [`system alert list`]({{< relref "CLIAlert.md#list-command" >}}) to view system alerts. #### Usage @@ -214,7 +214,7 @@ The `scrub` command allows you to start a one-time scrub task for the selected p #### Description `scrub` has two required properties, `name` and `action`. -To find the `name` of the pool you want to scrub, use [`storage pool query`]({{< relref "CLIPool.md #query-command" >}}) or [`storage dataset query id`]({{< relref "CLIDataset.md #query-command" >}}) to return the paths of all pools and child datasets on the system. 
+To find the `name` of the pool you want to scrub, use [`storage pool query`]({{< relref "CLIPool.md#query-command" >}}) or [`storage dataset query id`]({{< relref "CLIDataset.md#query-command" >}}) to return the paths of all pools and child datasets on the system. There are three possible values for `action`: diff --git a/content/SCALE/SCALECLIReference/System/CLIBoot.md b/content/SCALE/SCALECLIReference/System/CLIBoot.md index 2387ce5182..845525ac58 100644 --- a/content/SCALE/SCALECLIReference/System/CLIBoot.md +++ b/content/SCALE/SCALECLIReference/System/CLIBoot.md @@ -27,8 +27,8 @@ You can enter commands from the main CLI prompt or from the **boot** namespace p The `attach` command runs a job that attaches a device (disk) to the boot pool. Before running this command, use these commands: -* [`storage disk query`]({{< relref "CLIDisk.md #query-command" >}}) to locate the names and size of disks. -* [`storage disk get_unused`]({{< relref "CLIDisk.md #get_unused-command" >}}) to locate unused disks on the system. +* [`storage disk query`]({{< relref "CLIDisk.md#query-command" >}}) to locate the names and size of disks. +* [`storage disk get_unused`]({{< relref "CLIDisk.md#get_unused-command" >}}) to locate unused disks on the system. * [`system boot get_disks`](#get_disks-command) to get the name of the boot pool disk. {{< expand "Using the Attach Command" "v" >}} @@ -114,7 +114,7 @@ xvda ### Get_Scrub_Interval Command Use the `get_scrub_interval` command to obtain the number of days between boot pool scrubs. -The [`system advanced config`]({{< relref "CLIAdvanced.md #config-command" >}}) result also shows the `boot_scrub` interval. +The [`system advanced config`]({{< relref "CLIAdvanced.md#config-command" >}}) result also shows the `boot_scrub` interval. 
{{< expand "Using the Get_Scrub_Interval Command" "v" >}} #### Description @@ -182,8 +182,8 @@ system boot get_state Use the `replace` command to remove a device (drive) from the boot pool and replace it with a device of at least the same size. This command resilvers the boot pool and installs the boot loader on the new device. Before running this command, use these commands: -* [`storage disk query`]({{< relref "CLIDisk.md #query-command" >}}) to locate the names and size of disks. -* [`storage disk get_unused`]({{< relref "CLIDisk.md #get_unused-command" >}}) to locate unused disks on the system. +* [`storage disk query`]({{< relref "CLIDisk.md#query-command" >}}) to locate the names and size of disks. +* [`storage disk get_unused`]({{< relref "CLIDisk.md#get_unused-command" >}}) to locate unused disks on the system. * [`system boot get_disks`](#get_disks-command) to get the name of the boot pool disk. {{< expand "Using the Replace Command" "v" >}} @@ -249,7 +249,7 @@ system boot scrub ### Set_Scrub_Interval Command Use the `set_scrub_interval` command to set or change the interval (in days) between boot pool scrub operations. -You can also use the [`system advanced update boot_scrub=`]({{< relref "CLIAdvanced.md #update-command" >}}) command to set the boot pool scrub interval. +You can also use the [`system advanced update boot_scrub=`]({{< relref "CLIAdvanced.md#update-command" >}}) command to set the boot pool scrub interval. {{< expand "Using the Set_Scrub_Interval Command" "v" >}} #### Description The `set_scrub_interval` command has one required property, `interval`.
diff --git a/content/SCALE/SCALECLIReference/System/CLITrueCommand.md b/content/SCALE/SCALECLIReference/System/CLITrueCommand.md index 13780ac7f6..4771212ed8 100644 --- a/content/SCALE/SCALECLIReference/System/CLITrueCommand.md +++ b/content/SCALE/SCALECLIReference/System/CLITrueCommand.md @@ -82,7 +82,7 @@ system truecommand connected ### Update Command The `update` command allows you to update the TrueCommand configuration. -Use [`auth api_key create`]({{< relref "CLIApiKey.md #create-command" >}}) to obtain a new API key. +Use [`auth api_key create`]({{< relref "CLIApiKey.md#create-command" >}}) to obtain a new API key. {{< expand "Using the Update Command" "v" >}} #### Description diff --git a/content/SCALE/SCALETutorials/Apps/CommunityApps/InstallPiHoleApp.md b/content/SCALE/SCALETutorials/Apps/CommunityApps/InstallPiHoleApp.md index ad842cf335..6688313834 100644 --- a/content/SCALE/SCALETutorials/Apps/CommunityApps/InstallPiHoleApp.md +++ b/content/SCALE/SCALETutorials/Apps/CommunityApps/InstallPiHoleApp.md @@ -54,7 +54,7 @@ Click the arrow to the left of **folder /mnt Pi-hole uses volumes to store your data between container upgrades. {{< hint type=warning >}} -You need to create these directories in a dataset on SCALE before you begin installing this container. To create a directory, open the TrueNAS SCALE CLI and enter [`storage filesystem mkdir path="/PATH/TO/DIRECTORY"`]({{< relref "CLIFilesystem-Storage.md #mkdir-command" >}}). +You need to create these directories in a dataset on SCALE before you begin installing this container. To create a directory, open the TrueNAS SCALE CLI and enter [`storage filesystem mkdir path="/PATH/TO/DIRECTORY"`]({{< relref "CLIFilesystem-Storage.md#mkdir-command" >}}).
{{< /hint >}} ![AppPiHoleStorageSettings](/images/SCALE/Apps/AppPiHoleStorageSettings.png "PiHole Storage Settings") diff --git a/content/SCALE/SCALETutorials/Apps/UsingCustomApp.md b/content/SCALE/SCALETutorials/Apps/UsingCustomApp.md index 3c1b32b81f..20e1d04707 100644 --- a/content/SCALE/SCALETutorials/Apps/UsingCustomApp.md +++ b/content/SCALE/SCALETutorials/Apps/UsingCustomApp.md @@ -32,7 +32,7 @@ If your application requires directory paths, specific datasets, or other storag You cannot exit the configuration wizard and save settings to create data storage or directories in the middle of the process. If you are unsure about any configuration settings, review the [Install Custom App Screen UI reference article]({{< relref "InstallCustomAppScreens.md" >}}) before creating a new container image. -To create directories in a dataset on SCALE, before you begin installing the container, open the TrueNAS SCALE CLI and enter [`storage filesystem mkdir path="/PATH/TO/DIRECTORY"`]({{< relref "CLIFilesystem-Storage.md #mkdir-command" >}}). +To create directories in a dataset on SCALE, before you begin installing the container, open the TrueNAS SCALE CLI and enter [`storage filesystem mkdir path="/PATH/TO/DIRECTORY"`]({{< relref "CLIFilesystem-Storage.md#mkdir-command" >}}). {{< /hint >}} When you are ready to create a container, go to **Apps**, click **Discover Apps**, then click **Custom App**. diff --git a/content/SCALE/SCALETutorials/ConfigReportsScale.md b/content/SCALE/SCALETutorials/ConfigReportsScale.md index 7111746488..f5e07be763 100644 --- a/content/SCALE/SCALETutorials/ConfigReportsScale.md +++ b/content/SCALE/SCALETutorials/ConfigReportsScale.md @@ -49,7 +49,7 @@ To configure a reporting exporter in SCALE, you need the: * Port number the reporting service listens on. 
If using another TrueNAS system with a reporting application, this is the port number the TrueNAS system listens on (port:80) -For more information on reporting exporter settings, see [Add Reporting Exporter]({{< relref "ReportingScreensSCALE.md #add-reporting-exporter" >}}). +For more information on reporting exporter settings, see [Add Reporting Exporter]({{< relref "ReportingScreensSCALE.md#add-reporting-exporter" >}}). Go to **Reporting** and click on **Exporters** to open the **Reporting Exporters** screen. Any reporting exporters configured on the system display on the **Reporting Exporters** screen. diff --git a/content/SCALE/SCALETutorials/Shares/MixedModeShares.md b/content/SCALE/SCALETutorials/Shares/MixedModeShares.md index bc9208792f..1eac668688 100644 --- a/content/SCALE/SCALETutorials/Shares/MixedModeShares.md +++ b/content/SCALE/SCALETutorials/Shares/MixedModeShares.md @@ -107,7 +107,7 @@ Select **Multiprotocol** from the **Dataset Preset** dropdown. The share configu {{< trueimage src="/images/SCALE/Datasets/AddMultimodeDataset.png" alt="Adding a Multimode Dataset and Share" id="Adding a Multimode Dataset and Share" >}} (Optional) Click **Advanced Options** to customize other dataset settings such as quotas, compression level, encryption, and case sensitivity. -See [Creating Datasets]({{< relref "DatasetsSCALE.md #creating-a-dataset" >}}) for more information on adding and customizing datasets. +See [Creating Datasets]({{< relref "DatasetsSCALE.md#creating-a-dataset" >}}) for more information on adding and customizing datasets. Click **Save**. TrueNAS creates the dataset and the SMB and NFS shares. Next edit both shares. After editing the shares, edit the dataset ACL. @@ -178,4 +178,4 @@ After setting the dataset permission, connect to the share. 
### Connecting to a Multiprotocol Share After creating and configuring the shares, connect to the multiprotocol share using either SMB or NFS protocols from a variety of client operating systems including Windows, Apple, FreeBSD, and Linux/Unix systems. -For more information on accessing shares, see [Mounting the SMB Share]({{< relref "/SCALE/SCALETutorials/Shares/_index.md#mounting-the-smb-share" >}}) and [Connecting to the NFS Share]({{< relref "AddingNFSShares.md #connecting-to-the-nfs-share" >}}). +For more information on accessing shares, see [Mounting the SMB Share]({{< relref "/SCALE/SCALETutorials/Shares/_index.md#mounting-the-smb-share" >}}) and [Connecting to the NFS Share]({{< relref "AddingNFSShares.md#connecting-to-the-nfs-share" >}}). diff --git a/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md b/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md index 32bfb6a41f..de7932f9da 100644 --- a/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md +++ b/content/SCALE/SCALETutorials/Storage/Disks/SLOGOverprovisionSCALE.md @@ -42,7 +42,7 @@ ZFS permits removing and re-adding SLOG disks to an active pool at any time. ## Resizing a Disk to Over-Provision -SCALE uses the [`storage disk resize`]({{< relref "CLIDisk.md #resize-command" >}}) command to change the size of a device. The SCALE UI does not have a UI function for this command yet. +SCALE uses the [`storage disk resize`]({{< relref "CLIDisk.md#resize-command" >}}) command to change the size of a device. The SCALE UI does not have a UI function for this command yet. The `storage disk resize` command supports SAS, SATA, SAT (interposer) and NVMe drives. Power cycle SATA drives before a second resize. 1. Open a shell session using an SSH connection or from the local console.
diff --git a/content/SCALE/SCALEUIReference/ReportingScreensSCALE.md b/content/SCALE/SCALEUIReference/ReportingScreensSCALE.md index 4096a253db..4305823f10 100644 --- a/content/SCALE/SCALEUIReference/ReportingScreensSCALE.md +++ b/content/SCALE/SCALEUIReference/ReportingScreensSCALE.md @@ -154,4 +154,4 @@ Additional settings populate based on the selected **Type** option. {{< /truetable >}} {{< /expand >}} -See [Adding a Reporting Exporter]({{< relref "ConfigReportsScale.md #adding-a-reporting-exporter" >}}) for guidance with configuring a Graphite exporter on TrueNAS. +See [Adding a Reporting Exporter]({{< relref "ConfigReportsScale.md#adding-a-reporting-exporter" >}}) for guidance with configuring a Graphite exporter on TrueNAS. diff --git a/content/Solutions/Optimizations/Security.md b/content/Solutions/Optimizations/Security.md index 87866db144..646ecb13d9 100644 --- a/content/Solutions/Optimizations/Security.md +++ b/content/Solutions/Optimizations/Security.md @@ -31,7 +31,7 @@ Check back regularly for updates. Restrict new TrueNAS user accounts ([CORE]({{< relref "SettingUpUsersAndGroups.md" >}}) | [SCALE]({{< relref "ManageLocalUsersSCALE.md" >}})) to the most minimal set of storage ACL permissions and access possible. -On TrueNAS SCALE, [create the administrator account]({{< relref "ManageLocalUsersSCALE.md #creating-an-admin-user-account" >}}) on install and disable root NAS administrative access. +On TrueNAS SCALE, [create the administrator account]({{< relref "ManageLocalUsersSCALE.md#creating-an-admin-user-account" >}}) on install and disable root NAS administrative access. In TrueNAS SCALE 24.04 (Dragonfish) or later, use the **Credentials > Groups > Privileges** screen to define limited access administrative roles, such as read-only or share administrators. Assign users to those groups to grant partial NAS administrative access. 
Members of privilege groups can access the UI but cannot perform administrative tasks outside those defined by their role(s). diff --git a/static/includes/BrowsingSnapshotCollections1.md b/static/includes/BrowsingSnapshotCollections1.md index ddb6c78602..bc4a25c8df 100644 --- a/static/includes/BrowsingSnapshotCollections1.md +++ b/static/includes/BrowsingSnapshotCollections1.md @@ -23,7 +23,7 @@ To access snapshots: From the dataset root folder, open the .zfs directory and navigate to the snapshot. * Using the TrueNAS SCALE CLI, enter storage filesystem listdir path="/PATH/TO/DATASET/.zfs/PATH/TO/SNAPSHOT" to view snapshot contents. - See also [`storage filesystem`]({{< relref "clifilesystem-storage.md #listdir-command" >}}). + See also [`storage filesystem`]({{< relref "clifilesystem-storage.md#listdir-command" >}}). {{< expand "Command Example" "v" >}} ``` diff --git a/static/includes/MinIODatasetRequirements.md b/static/includes/MinIODatasetRequirements.md index f101c55063..de80afe52c 100644 --- a/static/includes/MinIODatasetRequirements.md +++ b/static/includes/MinIODatasetRequirements.md @@ -8,7 +8,7 @@ You can use either an existing pool or [create a new one]({{< relref "CreatePool After creating the dataset, create the directory where MinIO stores information the application uses. There are two ways to do this: -* In the TrueNAS SCALE CLI, use [`storage filesystem mkdir path="/PATH/TO/minio/data"`]({{< relref "CLIFilesystem-Storage.md #mkdir-command" >}}) to create the **/data** directory in the MinIO dataset. +* In the TrueNAS SCALE CLI, use [`storage filesystem mkdir path="/PATH/TO/minio/data"`]({{< relref "CLIFilesystem-Storage.md#mkdir-command" >}}) to create the **/data** directory in the MinIO dataset. {{< expand "Command Example" "v" >}} ```