diff --git a/docs/books/admin_guide/03-commands.md b/docs/books/admin_guide/03-commands.md index eb1b5d59bf..838509be4f 100644 --- a/docs/books/admin_guide/03-commands.md +++ b/docs/books/admin_guide/03-commands.md @@ -199,7 +199,7 @@ man 5 passwd will inform him about the files related to the command. -Navigate through the manual with the arrows and . Exit the manual by pressing the q key. +Navigate through the manual with the arrows ++arrow-up++ and ++arrow-down++. Exit the manual by pressing the ++q++ key. ### `shutdown` command @@ -252,21 +252,21 @@ To manipulate the history, the following commands entered from the command promp | Keys | Function | | ------------------ | --------------------------------------------------------- | -| !! | Recalls the last command placed. | -| !n | Recalls the command by its number in the list. | -| !string | Recalls the most recent command beginning with the string. | -| | Navigates through your history working backward in time from the most recent command. | -| | Navigates through your history working forward in time. | +| ++exclam+exclam++ | Recalls the last command placed. | +| ++exclam+n++ | Recalls the command by its number in the list. | +| ++exclam+"string"++ | Recalls the most recent command beginning with the string. | +| ++arrow-up++ | Navigates through your history working backward in time from the most recent command. | +| ++arrow-down++ | Navigates through your history working forward in time. | ### Auto-complete Auto-completion is a great help. * Completes commands, entered paths, or file names. -* Press the TAB key to complete the entry in the case of a single solution. -* In the case of multiple solutions, press TAB a second time to see options. +* Press the ++tab++ key to complete the entry in the case of a single solution. +* In the case of multiple solutions, press ++tab++ a second time to see options. -If double-pressing the TAB key presents no options, then there is no solution to the current completion. 
+If double-pressing the ++tab++ key presents no options, then there is no solution to the current completion. ## Display and Identification @@ -278,7 +278,7 @@ On a physical terminal, the display will be permanently hidden, whereas in a gra !!! Tip - CTRL + L will have the same effect as the `clear` command + ++control+l++ will have the same effect as the `clear` command ### `echo` command @@ -850,7 +850,7 @@ root:x:0:0:root:/root:/bin/bash ... ``` -Using the ENTER key, the move is line by line. Using the SPACE key, the move is page by page. `/text` allows you to search for the occurrence in the file. +Using the ++enter++ key, the move is line by line. Using the ++space++ key, the move is page by page. `/text` allows you to search for the occurrence in the file. ### `less` command @@ -864,14 +864,14 @@ The commands specific to `less` are: | Command | Action | | ----------------- | ----------------------------------------------- | -| h | Help. | -| | Move up, down a line, or to the right or left. | -| Enter | Move down one line. | -| Space | Move down one page. | -| PgUp and PgDn | Move up or down one page. | -| g and G | Move to the first and last pages | +| ++h++ | Help. | +| ++arrow-up++ ++arrow-down++ ++arrow-right++ ++arrow-left++ | Move up, down a line, or to the right or left. | +| ++enter++ | Move down one line. | +| ++space++ | Move down one page. | +| ++page-up++ and ++page-down++ | Move up or down one page. | +| ++"g"++ and ++g++ | Move to the first and last pages | | `/text` | Search for text. | -| q | Quit the `less` command. | +| ++q++ | Quit the `less` command. | ### `cat` command @@ -971,7 +971,7 @@ tcpdump::x:72:72::/:/sbin/nologin user1:x:500:500:grp1:/home/user1:/bin/bash ``` -With the `-f` option, the change information of the file will always be output unless the user exits the monitoring state with CTRL + C. This option is very frequently used to track log files (the logs) in real time. 
+With the `-f` option, the change information of the file will always be output unless the user exits the monitoring state with ++control+c++. This option is very frequently used to track log files (the logs) in real time. Without the `-n` option, the `tail` command displays the last 10 lines of the file. @@ -1000,7 +1000,7 @@ adm:x:3:4:adm:/var/adm/:/sbin/nologin | `-o file` | Saves the sort to the specified file. | | `-t` | Specify a delimiter, which requires that the contents of the corresponding file must be regularly delimited column contents, otherwise they cannot be sorted properly. | | `-r` | Reverse the order of the result. Used in conjunction with the `-n` option to sort in order from largest to smallest. | -| `-u` | Remove duplicates after sorting. Equivalent to `sort file | uniq`. | +| `-u` | Remove duplicates after sorting. Equivalent to `sort file \| uniq`. | The `sort` command sorts the file only on the screen. The file is not modified by the sorting. To save the sort, use the `-o` option or an output redirection `>`. @@ -1394,7 +1394,7 @@ When both output streams are redirected, no information is displayed on the scre A **pipe** is a mechanism allowing you to link the standard output of a first command to the standard input of a second command. -This communication is uni directional and is done with the `|` symbol. The pipe symbol `|` is obtained by pressing the SHIFT + | simultaneously. +This communication is unidirectional and is done with the `|` symbol. The pipe symbol `|` is obtained by pressing ++shift+bar++ simultaneously. ![pipe](images/pipe.png) @@ -1596,7 +1596,7 @@ none on /proc/sys/fs/binfmt_misc type binfmt_misc (r The `;` character strings the commands. -The commands will all run sequentially in the order of input once the user presses ENTER. +The commands will all run sequentially in the order of input once the user presses ++enter++.
```bash ls /; cd /home; ls -lia; cd / diff --git a/docs/books/admin_guide/04-advanced-commands.md b/docs/books/admin_guide/04-advanced-commands.md index a215e29578..10fe4410b9 100644 --- a/docs/books/admin_guide/04-advanced-commands.md +++ b/docs/books/admin_guide/04-advanced-commands.md @@ -333,7 +333,7 @@ The `-n` option allows you to specify the number of seconds between each executi !!! Note - To exit the `watch` command, you must type the keys: CTRL+C to kill the process. + To exit the `watch` command, you must type the keys: ++control+c++ to kill the process. Examples: @@ -421,7 +421,7 @@ This command already saves time. Combine it with owner, owner group, and rights ```bash sudo install -v -o rocky -g users -m 644 -D -t ~/samples/ src/sample.txt ``` - !!! note +!!! note `sudo` is required in this case to make property changes. diff --git a/docs/books/admin_guide/05-vi.md b/docs/books/admin_guide/05-vi.md index 1fb58fd854..1f1a5033bc 100644 --- a/docs/books/admin_guide/05-vi.md +++ b/docs/books/admin_guide/05-vi.md @@ -74,15 +74,15 @@ At startup, VI is in *commands* mode. !!! Tip - A line of text is ended by pressing ENTER but if the screen is not wide enough, VI makes automatic line breaks, _wrap_ configuration by default. These line breaks may not be desired, this is the _nowrap_ configuration. + A line of text is ended by pressing ++enter++ but if the screen is not wide enough, VI makes automatic line breaks, *wrap* configuration by default. These line breaks may not be desired, this is the *nowrap* configuration. -To exit VI, from the Commands mode, press : then type: +To exit VI, from the Commands mode, press ++colon++ then type: * `q` to exit without saving (*quit*); * `w` to save your work (*write*); * `wq` (*write quit*) or `x` (*eXit*) to save and exit. -In command mode, Click the Z key of uppercase status twice in a row to save and exit. +In command mode, press the uppercase ++z++ key twice in a row (`ZZ`) to save and exit.
To force the exit without confirmation, you must add *!* to the previous commands. @@ -104,7 +104,7 @@ The third mode, *ex*, is a footer command mode from an old text editor. ### The Command Mode -This is the default mode when VI starts up. To access it from any of the other modes, simply press the ESC key. +This is the default mode when VI starts up. To access it from any of the other modes, simply press the ++escape++ key. At this time, all keyboard typing is interpreted as commands and the corresponding actions are executed. These are essentially commands for editing text (copy, paste, undo, ...). @@ -120,7 +120,7 @@ The text is not entered directly into the file but into a buffer zone in the mem This is the file modification mode. To access it, you must first switch to *command* mode, then enter the *ex* command frequently starting with the character `:`. -The command is validated by pressing the ENTER key. +The command is validated by pressing the ++enter++ key. ## Moving the cursor @@ -136,65 +136,65 @@ The cursor is placed under the desired character. * Move one or `n` characters to the left: -, n, h or nh +++arrow-left++, ++n++ ++arrow-left++, ++h++ or ++n++ ++h++ * Move one or `n` characters to the right: -, n, l or nl +++arrow-right++, ++n++ ++arrow-right++, ++l++ or ++n++ ++l++ * Move one or `n` characters up: -, n, k or nk +++arrow-up++, ++n++ ++arrow-up++, ++k++ or ++n++ ++k++ * Move one or `n` characters down: -, n, j or nj +++arrow-down++, ++n++ ++arrow-down++, ++j++ or ++n++ ++j++ * Move to the end of the line: -$ or END +++"$"++ or ++end++ * Move to the beginning of the line: -0 or POS1 +++0++ or ++"POS1"++ ### From the first character of a word Words are made up of letters or numbers. Punctuation characters and apostrophes separate words. -If the cursor is in the middle of a word w moves to the next word, b moves to the beginning of the word. 
+If the cursor is in the middle of a word ++w++ moves to the next word, ++b++ moves to the beginning of the word. If the line is finished, VI goes automatically to the next line. * Move one or `n` words to the right: -w or nw +++w++ or ++n++ ++w++ * Move one or `n` words to the left: -b or nb +++b++ or ++n++ ++b++ ### From any location on a line * Move to last line of text: -G +++g++ * Move to line `n`: -nG +++n++ ++g++ * Move to the first line of the screen: -H +++h++ * Move to the middle line of the screen: -M +++m++ * Move to the last line of the screen: -L +++l++ ## Inserting text @@ -204,37 +204,37 @@ VI switches to *insert* mode after entering one of these keys. !!! Note - VI switches to *insertion* mode. So you will have to press the ESC key to return to *command* mode. + VI switches to *insertion* mode. So you will have to press the ++escape++ key to return to *command* mode. ### In relation to a character * Inserting text before a character: -i (*insert*) +++"i"++ (*insert*) * Inserting text after a character: -a (*append*) +++"a"++ (*append*) ### In relation to a line * Inserting text at the beginning of a line: -I +++i++ * Inserting text at the end of a line: -A +++a++ ### In relation to the text * Inserting text before a line: -O +++o++ * Inserting text after a line: -o +++"o"++ ## Characters, words and lines @@ -258,41 +258,41 @@ These operations are done in *command* mode. * Delete one or `n` characters: -x or nx +++"x"++ or ++"n"++ ++"x"++ * Replace a character with another: -rcharacter +++"r"+"character"++ * Replace more than one character with others: -RcharactersESC +++r+"characters"+escape++ !!! Note - The R command switches to *replace* mode, which is a kind of *insert* mode. + The ++r++ command switches to *replace* mode, which is a kind of *insert* mode. 
### Words * Delete (cut) one or `n` words: -dw or ndw +++"d"+"w"++ or ++"n"+"d"+"w"++ * Copy one or `n` words: -yw or nyw +++"y"+"w"++ or ++"n"+"y"+"w"++ * Paste a word once or `n` times after the cursor: -p or np +++p++ or ++"n"+"p"++ * Paste a word once or `n` times before the cursor: -P or nP +++p++ or ++"n"+p++ * Replace one word: -cw*word*ESC +++c+w+"word"+escape++ !!! Tip @@ -303,65 +303,65 @@ These operations are done in *command* mode. * Delete (cut) one or `n` lines: -dd or ndd +++"d"+"d"++ or ++"n"+"d"+"d"++ * Copy one or `n` lines: -yy or nyy +++"y"+"y"++ or ++"n"+"y"+"y"++ * Paste what has been copied or deleted once or `n` times after the current line: -p or np +++"p"++ or ++"n"+"p"++ * Paste what has been copied or deleted once or `n` times before the current line: -P or nP +++p++ or ++"n"+p++ * Delete (cut) from the beginning of the line to the cursor: -d0 +++"d"+0++ * Delete (cut) from the cursor to the end of the line: -d$ +++"d"+"$"++ * Copy from the beginning of the line to the cursor: -y0 +++"y"+0++ * Copy from the cursor to the end of the line: -y$ +++"y"+"$"++ * Delete (cut) the text from the current line: -dL or dG +++"d"+l++ or ++"d"+g++ * Copy the text from the current line: -yL or yG +++"y"+l++ or ++"y"+g++ ### Cancel an action * Undo the last action: -u +++u++ * Undo the actions on the current line: -U +++u++ ### Cancel cancellation * Cancel a cancellation -Ctrl+r +++control+r++ ## EX commands -The *Ex* mode allows you to act on the file (saving, layout, options, ...). It is also in *Ex* mode where search and replace commands are entered. The commands are displayed at the bottom of the page and must be validated with the ENTER key. +The *Ex* mode allows you to act on the file (saving, layout, options, ...). It is also in *Ex* mode where search and replace commands are entered. The commands are displayed at the bottom of the page and must be validated with the ++enter++ key. -To switch to *Ex* mode, from *command* mode, type :.
+To switch to *Ex* mode, from *command* mode, type ++colon++. ### File line numbers @@ -383,11 +383,11 @@ To switch to *Ex* mode, from *command* mode, type :. * Find the next matching string: -n +++"n"++ * Find the previous matching string: -N +++n++ There are wildcards to facilitate the search in VI. diff --git a/docs/books/admin_guide/07-file-systems.md b/docs/books/admin_guide/07-file-systems.md index ed17f5bc2a..7f3b13adf9 100644 --- a/docs/books/admin_guide/07-file-systems.md +++ b/docs/books/admin_guide/07-file-systems.md @@ -37,12 +37,12 @@ and also discover: Partitioning will allow the installation of several operating systems because it is impossible for them to cohabit on the same logical drive. It also allows the separation of data logically (security, access optimization, etc.). -The partition table, stored in the first sector of the disk (MBR: _Master Boot Record_), records the division of the physical disk into partitioned volumes. +The partition table, stored in the first sector of the disk (MBR: *Master Boot Record*), records the division of the physical disk into partitioned volumes. For **MBR** partition table types, the same physical disk can be divided into a maximum of 4 partitions: -- _Primary partition_ (or main partition) -- _Extended partition_ +- *Primary partition* (or main partition) +- *Extended partition* !!! Warning @@ -74,7 +74,7 @@ In the world of GNU/Linux, everything is a file. For disks, they are recognized The Linux kernel contains drivers for most hardware devices. -What we call _devices_ are the files stored without `/dev`, identifying the different hardware detected by the motherboard. +What we call *devices* are the files stored under `/dev`, identifying the different hardware detected by the motherboard. The service called udev is responsible for applying the naming conventions (rules) and applying them to the devices it detects.
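The device files mentioned above are easy to inspect. As a quick illustrative sketch (not part of the original guide), `/dev/null` is a character device present on every Linux system; `ls -l` shows the type in the first column (`c` for character, `b` for block), and GNU `stat` can name the type directly:

```bash
# The first column of ls -l gives the file type:
# c = character device, b = block device
ls -l /dev/null

# test -c succeeds only for character special files
test -c /dev/null && echo "character device"

# GNU stat prints the file type by name
stat -c %F /dev/null    # character special file
```

The same checks work for block devices with `test -b` (for example on `/dev/sda`, where present).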
@@ -94,7 +94,7 @@ There are at least two commands for partitioning a disk: `fdisk` and `cfdisk`. B The only reason to use `fdisk` is when you want to list all logical devices with the `-l` option. `fdisk` uses MBR partition tables, so it is not supported for **GPT** partition tables and cannot be processed for disks larger than **2TB**. -``` +```bash sudo fdisk -l sudo fdisk -l /dev/sdc sudo fdisk -l /dev/sdc2 @@ -102,11 +102,11 @@ sudo fdisk -l /dev/sdc2 ### `parted` command -The `parted` (_partition editor_) command can partition a disk without the drawbacks of `fdisk`. +The `parted` (*partition editor*) command can partition a disk without the drawbacks of `fdisk`. The `parted` command can be used on the command line or interactively. It also has a recovery function capable of rewriting a deleted partition table. -``` +```bash parted [-l] [device] ``` @@ -124,13 +124,13 @@ The `gparted` command, when run without any arguments, will show an interactive The `cfdisk` command is used to manage partitions. -``` +```bash cfdisk device ``` Example: -``` +```bash $ sudo cfdisk /dev/sda Disk: /dev/sda Size: 16 GiB, 17179869184 bytes, 33554432 sectors @@ -149,7 +149,7 @@ $ sudo cfdisk /dev/sda [ Write ] [ Dump ] ``` -The preparation, without _LVM_, of the physical media goes through five steps: +The preparation, without *LVM*, of the physical media goes through five steps: - Setting up the physical disk; - Partitioning of the volumes (a division of the disk, possibility of installing several systems, ...); @@ -159,15 +159,15 @@ The preparation, without _LVM_, of the physical media goes through five steps: ## Logical Volume Manager (LVM) -**L**ogical **V**olume **M**anager (_LVM_) +**L**ogical **V**olume **M**anager (*LVM*) The partition created by the **standard partition** cannot dynamically adjust the resources of the hard disk, once the partition is mounted, the capacity is completely fixed, this constraint is unacceptable on the server. 
Although the standard partition can be forcibly expanded or shrunk through certain technical means, it can easily cause data loss. LVM can solve this problem very well. LVM is available under Linux from kernel version 2.4, and its main features are: - More flexible disk capacity; - Online data movement; -- Disks in _stripe_ mode; +- Disks in *stripe* mode; - Mirrored volumes (recopy); -- Volume snapshots (_snapshot_). +- Volume snapshots (*snapshot*). The principle of LVM is very simple: @@ -193,7 +193,7 @@ The disadvantage is that if one of the physical volumes becomes out of order, th !!! note - LVM is only managed by the operating system. Therefore the _BIOS_ needs at least one partition without LVM to boot. + LVM is only managed by the operating system. Therefore the *BIOS* needs at least one partition without LVM to boot. !!! info @@ -204,7 +204,7 @@ The disadvantage is that if one of the physical volumes becomes out of order, th There are several storage mechanisms when storing data to **LV**, two of which are: - Linear volumes; -- Volumes in _stripe_ mode; +- Volumes in *stripe* mode; - Mirrored volumes. ![Linear volumes](images/07-file-systems-005.png) @@ -229,20 +229,20 @@ The main relevant commands are as follows: The `pvcreate` command is used to create physical volumes. It turns Linux partitions (or disks) into physical volumes. -``` +```bash pvcreate [-options] partition ``` Example: -``` +```bash [root]# pvcreate /dev/hdb1 pvcreate -- physical volume « /dev/hdb1 » successfully created ``` You can also use a whole disk (which facilitates disk size increases in virtual environments for example). -``` +```bash [root]# pvcreate /dev/hdb pvcreate -- physical volume « /dev/hdb » successfully created @@ -259,13 +259,13 @@ pvcreate -- physical volume « /dev/hdb » successfully created The `vgcreate` command creates volume groups. It groups one or more physical volumes into a volume group. 
-``` +```bash vgcreate [option] ``` Example: -``` +```bash [root]# vgcreate volume1 /dev/hdb1 … vgcreate – volume group « volume1 » successfully created and activated @@ -278,13 +278,13 @@ vgcreate – volume group « volume1 » successfully created and activated The `lvcreate` command creates logical volumes. The file system is then created on these logical volumes. -``` +```bash lvcreate -L size [-n name] VG_name ``` Example: -``` +```bash [root]# lvcreate –L 600M –n VolLog1 volume1 lvcreate -- logical volume « /dev/volume1/VolLog1 » successfully created ``` @@ -305,13 +305,13 @@ lvcreate -- logical volume « /dev/volume1/VolLog1 » successfully created The `pvdisplay` command allows you to view information about the physical volumes. -``` +```bash pvdisplay /dev/PV_name ``` Example: -``` +```bash [root]# pvdisplay /dev/PV_name ``` @@ -319,13 +319,13 @@ Example: The `vgdisplay` command allows you to view information about volume groups. -``` +```bash vgdisplay VG_name ``` Example: -``` +```bash [root]# vgdisplay volume1 ``` @@ -333,13 +333,13 @@ Example: The `lvdisplay` command allows you to view information about the logical volumes. -``` +```bash lvdisplay /dev/VG_name/LV_name ``` Example: -``` +```bash [root]# lvdisplay /dev/volume1/VolLog1 ``` @@ -358,7 +358,7 @@ The preparation with LVM of the physical support is broken down into the followi ## Structure of a file system -A _file system_ **FS** is in charge of the following actions: +A *file system* **FS** is in charge of the following actions: - Securing access and modification rights to files; - Manipulating files: create, read, modify, and delete; @@ -371,13 +371,13 @@ The Linux operating system is able to use different file systems (ext2, ext3, ex The `mkfs`(make file system) command allows you to create a Linux file system. -``` +```bash mkfs [-t fstype] filesys ``` Example: -``` +```bash [root]# mkfs -t ext4 /dev/sda1 ``` @@ -447,7 +447,7 @@ A file is managed by its inode number. 
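As a small illustration of this point (a sketch with invented file names, assuming GNU coreutils), `ls -i` prints the inode number in front of each name, `stat -c %i` prints it alone, and a hard link reuses the inode of its target:

```bash
# Scratch directory and file for the demonstration
dir=$(mktemp -d)
touch "$dir/myfile"

# ls -i prefixes each name with its inode number
ls -i "$dir/myfile"

# stat extracts the inode number alone
stat -c %i "$dir/myfile"

# A hard link shares the inode of its target
ln "$dir/myfile" "$dir/hardlink"
stat -c %i "$dir/myfile" "$dir/hardlink"    # same number twice
```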
The size of the inode table determines the maximum number of files the FS can contain. -Information present in the _inode table_ : +Information present in the *inode table*: - Inode number; - File type and access permissions; @@ -482,19 +482,19 @@ In case of errors, solutions are proposed to repair the inconsistencies. After r The `fsck` command is a console-mode integrity check and repair tool for Linux file systems. -``` +```bash fsck [-sACVRTNP] [ -t fstype ] filesys ``` Example: -``` +```bash [root]# fsck /dev/sda1 ``` To check the root partition, it is possible to create a `forcefsck` file and reboot or run `shutdown` with the `-F` option. -``` +```bash [root]# touch /forcefsck [root]# reboot or @@ -517,31 +517,31 @@ By definition, a File System is a tree structure of directories built from a roo Text document, directory, binary, partition, network resource, screen, keyboard, Unix kernel, user program, ... -Linux meets the **FHS** (_Filesystems Hierarchy Standard_) (see `man hier`), which defines the folders' names and roles. +Linux complies with the **FHS** (*Filesystem Hierarchy Standard*) (see `man hier`), which defines the folders' names and roles.
| Directory | Functionality | Complete word | | ---------- | ---------------------------------------------------------------------------------------------------------------- | ----------------------------- | | `/` | Contains special directories | | | `/boot` | Files related to system startup | | -| `/sbin` | Commands necessary for system startup and repair | _system binaries_ | -| `/bin` | Executables of basic system commands | _binaries_ | +| `/sbin` | Commands necessary for system startup and repair | *system binaries* | +| `/bin` | Executables of basic system commands | *binaries* | | `/usr/bin` | System administration commands | | -| `/lib` | Shared libraries and kernel modules | _libraries_ | -| `/usr` | Saves data resources related to UNIX | _UNIX System Resources_ | -| `/mnt` | Temporary mount point directory | _mount_ | +| `/lib` | Shared libraries and kernel modules | *libraries* | +| `/usr` | Saves data resources related to UNIX | *UNIX System Resources* | +| `/mnt` | Temporary mount point directory | *mount* | | `/media` | For mounting removable media | | | `/misc` | To mount the shared directory of the NFS service. 
| | | `/root` | Administrator's login directory | | | `/home` | The upper-level directory of a common user's home directory | | -| `/tmp` | The directory containing temporary files | _temporary_ | -| `/dev` | Special device files | _device_ | -| `/etc` | Configuration and script files | _editable text configuration_ | -| `/opt` | Specific to installed applications | _optional_ | -| `/proc` | This is a mount point for the proc filesystem, which provides information about running processes and the kernel | _processes_ | -| `/var` | This directory contains files which may change in size, such as spool and log files | _variables_ | +| `/tmp` | The directory containing temporary files | *temporary* | +| `/dev` | Special device files | *device* | +| `/etc` | Configuration and script files | *editable text configuration* | +| `/opt` | Specific to installed applications | *optional* | +| `/proc` | This is a mount point for the proc filesystem, which provides information about running processes and the kernel | *processes* | +| `/var` | This directory contains files which may change in size, such as spool and log files | *variables* | | `/sys` | Virtual file system, similar to /proc | | | `/run` | That is /var/run | | -| `/srv` | Service Data Directory | _service_ | +| `/srv` | Service Data Directory | *service* | - To mount or unmount at the tree level, you must not be under its mount point. - Mounting on a non-empty directory does not delete the content. It is only hidden. @@ -556,7 +556,7 @@ The `/etc/fstab` file is read at system startup and contains the mounts to be pe Lines are read sequentially (`fsck`, `mount`, `umount`). -``` +```bash /dev/mapper/VolGroup-lv_root / ext4 defaults 1 1 UUID=46….92 /boot ext4 defaults 1 2 /dev/mapper/VolGroup-lv_swap swap swap defaults 0 0 @@ -604,13 +604,13 @@ The `mount -a` command allows you to mount automatically based on the contents o The `mount` command allows you to mount and view the logical drives in the tree. 
-``` +```bash mount [-option] [device] [directory] ``` Example: -``` +```bash [root]# mount /dev/sda7 /home ``` @@ -631,13 +631,13 @@ Example: The `umount` command is used to unmount logical drives. -``` +```bash umount [-option] [device] [directory] ``` Example: -``` +```bash [root]# umount /home [root]# umount /dev/sda7 ``` @@ -664,7 +664,7 @@ As in any system, it is important to respect the file naming rules to navigate t Groups of words separated by spaces must be enclosed in quotation marks: -``` +```bash [root]# mkdir "working dir" ``` @@ -689,7 +689,7 @@ Examples of file extension agreements: ### Details of a file name -``` +```bash [root]# ls -liah /usr/bin/passwd 266037 -rwsr-xr-x 1 root root 59K mars 22 2019 /usr/bin/passwd 1 2 3 4 5 6 7 8 9 @@ -741,7 +741,7 @@ Shell > ls -ldi /tmp/t1 #### Special files -To communicate with peripherals (hard disks, printers, etc.), Linux uses interface files called special files (_device file_ or _special file_). These files allow the peripherals to identify themselves. +To communicate with peripherals (hard disks, printers, etc.), Linux uses interface files called special files (*device file* or *special file*). These files allow the peripherals to identify themselves. These files are special because they do not contain data but specify the access mode to communicate with the device. @@ -762,12 +762,12 @@ crw------- 1 root root 8, 0 jan 1 1970 /dev/tty0 #### Communication files -These are the pipe (_pipes_) and the _socket_ files. +These are the pipe (*pipes*) and the *socket* files. -- **Pipe files** pass information between processes by FIFO (_First In, First Out_). - One process writes transient information to a _pipe_ file, and another reads it. After reading, the information is no longer accessible. +- **Pipe files** pass information between processes by FIFO (*First In, First Out*). + One process writes transient information to a *pipe* file, and another reads it. 
After reading, the information is no longer accessible. -- **Socket files** allow bidirectional inter-process communication (on local or remote systems). They use an _inode_ of the file system. +- **Socket files** allow bidirectional inter-process communication (on local or remote systems). They use an *inode* of the file system. #### Link files @@ -783,7 +783,7 @@ Their main features are: | Link types | Description | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | soft link file | Represents a shortcut similar to Windows. It has permission of 777 and points to the original file. When the original file is deleted, the linked file and the original file are displayed in red. | -| Hard link file | Represents the original file. It has the same _ inode_ number as the hard-linked file. They can be updated synchronously, including the contents of the file and when it was modified. Cannot cross partitions, cannot cross file systems. Cannot be used for directories. | +| Hard link file | Represents the original file. It has the same *inode* number as the hard-linked file. They can be updated synchronously, including the contents of the file and when it was modified. Cannot cross partitions, cannot cross file systems. Cannot be used for directories. | Specific examples are as follows: @@ -825,7 +825,7 @@ Linux is a multi-user operating system where the control of access to files is e These controls are functions of: - file access permissions ; -- users (_ugo_ _Users Groups Others_). +- users (*ugo* *Users Groups Others*). ### Basic permissions of files and directories @@ -867,7 +867,7 @@ The description of **directory permissions** is as follows: The display of rights is done with the command `ls -l`. 
It is the last 9 characters of the block of 10. More precisely 3 times 3 characters. -``` +```bash [root]# ls -l /tmp/myfile -rwxrw-r-x 1 root sys ... /tmp/myfile 1 2 3 4 5 @@ -881,7 +881,7 @@ | 4 | File owner | | 5 | Group owner of the file | -By default, the _owner_ of a file is the one who created it. The _group_ of the file is the group of the owner who created the file. The _others_ are those not concerned by the previous cases. +By default, the *owner* of a file is the one who created it. The *group* of the file is the group of the owner who created the file. The *others* are those not concerned by the previous cases. The attributes are changed with the `chmod` command. @@ -891,7 +891,7 @@ Only the administrator and the owner of a file can change the rights of a file. The `chmod` command allows you to change the access permissions to a file. -``` +```bash chmod [option] mode file ``` @@ -903,9 +903,9 @@ chmod [option] mode file The rights of files and directories are not dissociated. For some operations, it will be necessary to know the rights of the directory containing the file. A write-protected file can be deleted by another user as long as the rights of the directory containing it allow this user to perform this operation. -The mode indication can be an octal representation (e.g. `744`) or a symbolic representation ([`ugoa`][`+=-`][`rwxst`]). +The mode indication can be an octal representation (e.g. `744`) or a symbolic representation ([`ugoa`] [`+=-`] [`rwxst`]). -##### Octal (or number)representation: +##### Octal (or number) representation | Number | Description | | :----: | ----------- | @@ -926,7 +926,7 @@ Add the three numbers together to get one user type permission. E.g. **755=rwxr- Sometimes you will see `chmod 4755`. The number 4 here refers to the special permission **set uid**.
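Setting special permissions aside, the plain octal modes are easy to verify on a throwaway file (a sketch with invented paths; `stat -c %a` is GNU coreutils and prints the current mode back in octal):

```bash
# Scratch file for experimenting with modes
f=$(mktemp)

# 755 = rwx (4+2+1) for the owner, r-x (4+1) for group and others
chmod 755 "$f"
stat -c %a "$f"    # 755

# 644 = rw- (4+2) for the owner, r-- (4) for group and others
chmod 644 "$f"
stat -c %a "$f"    # 644
```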
Special permissions will not be expanded here for the moment, just as a basic understanding. -``` +```bash [root]# ls -l /tmp/fil* -rwxrwx--- 1 root root … /tmp/file1 -rwx--x--- 1 root root … /tmp/file2 @@ -945,7 +945,7 @@ This method can be considered as a "literal" association between a user type, an ![Symbolic method](images/07-file-systems-014.png) -``` +```bash [root]# chmod -R u+rwx,g+wx,o-r /tmp/file1 [root]# chmod g=x,o-r /tmp/file2 [root]# chmod -R o=r /tmp/file3 @@ -955,8 +955,8 @@ This method can be considered as a "literal" association between a user type, an When a file or directory is created, it already has permissions. -- For a directory: `rwxr-xr-x` or _755_. -- For a file: `rw-r-r-` or _644_. +- For a directory: `rwxr-xr-x` or *755*. +- For a file: `rw-r-r-` or *644*. This behavior is defined by the **default mask**. @@ -985,13 +985,13 @@ For a file, the execution rights are removed: The `umask` command allows you to display and modify the mask. -``` +```bash umask [option] [mode] ``` Example: -``` +```bash $ umask 033 $ umask 0033 diff --git a/docs/books/admin_guide/08-process.md b/docs/books/admin_guide/08-process.md index 3b4ddd96ab..2a1a73e3e4 100644 --- a/docs/books/admin_guide/08-process.md +++ b/docs/books/admin_guide/08-process.md @@ -10,13 +10,13 @@ In this chapter, you will learn how to work with processes. **Objectives**: In this chapter, future Linux administrators will learn how to: -:heavy_check_mark: Recognize the `PID` and `PPID` of a process; -:heavy_check_mark: View and search for processes; +:heavy_check_mark: Recognize the `PID` and `PPID` of a process; +:heavy_check_mark: View and search for processes; :heavy_check_mark: Manage processes. 
:checkered_flag: **process**, **linux** -**Knowledge**: :star: :star: +**Knowledge**: :star: :star: **Complexity**: :star: **Reading time**: 20 minutes @@ -31,17 +31,17 @@ When a program runs, the system will create a process by placing the program dat Each process has: -* a _PID_: _**P**rocess **ID**entifier_, a unique process identifier -* a _PPID_: _**P**arent **P**rocess **ID**entifier_, unique identifier of parent process +* a *PID*: ***P**rocess **ID**entifier*, a unique process identifier +* a *PPID*: ***P**arent **P**rocess **ID**entifier*, unique identifier of parent process By successive filiations, the `init` process is the father of all processes. * A parent process always creates a process * A parent process can have multiple child processes -There is a parent/child relationship between processes. A child process results from the parent calling the _fork()_ primitive and duplicating its code to create a child. The _PID_ of the child is returned to the parent process so that it can talk to it. Each child has its parent's identifier, the _PPID_. +There is a parent/child relationship between processes. A child process results from the parent calling the *fork()* primitive and duplicating its code to create a child. The *PID* of the child is returned to the parent process so that it can talk to it. Each child has its parent's identifier, the *PPID*. -The _PID_ number represents the process at the time of execution. When the process finishes, the number is available again for another process. Running the same command several times will produce a different _PID_ each time. +The *PID* number represents the process at the time of execution. When the process finishes, the number is available again for another process. Running the same command several times will produce a different *PID* each time. @@ -52,12 +52,14 @@ The _PID_ number represents the process at the time of execution. 
When the proce
## Viewing processes

The `ps` command displays the status of running processes.
-```
+
+```bash
ps [-e] [-f] [-u login]
```

Example:
-```
+
+```bash
# ps -fu root
```
@@ -84,7 +86,7 @@ Without an option specified, the `ps` command only displays processes running fr
The result is displayed in the following columns:
-```
+```bash
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jan01 ? 00:00:03 /sbin/init
@@ -103,7 +105,7 @@ root 1 0 0 Jan01 ? 00:00:03 /sbin/init
The output of the command can be fully customized:
-```
+```bash
# ps -e --format "%P %p %c %n" --sort ppid --headers
PPID PID COMMAND NI
0 1 systemd 0
@@ -123,14 +125,14 @@ The user process:
* is started from a terminal associated with a user
* accesses resources via requests or daemons
-The system process (_daemon_):
+The system process (*daemon*):
* is started by the system
* is not associated with any terminal and is owned by a system user (often `root`)
* is loaded at boot time, resides in memory, and is waiting for a call
* is usually identified by the letter `d` associated with the process name
-System processes are therefore called daemons (_**D**isk **A**nd **E**xecution **MON**itor_).
+System processes are therefore called daemons (***D**isk **A**nd **E**xecution **MON**itor*).
## Permissions and rights
@@ -192,23 +194,23 @@ The constraints of the asynchronous mode:
The `kill` command sends a stop signal to a process.
-```
+```bash
kill [-signal] PID
```

Example:
-```
-$ kill -9 1664
+```bash
+kill -9 1664
```

| Code | Signal | Description |
|------|-----------|--------------------------------------------------------|
-| `2` | _SIGINT_ | Immediate termination of the process |
-| `9` | _SIGKILL_ | Interrupt the process (CTRL + D) |
-| `15` | _SIGTERM_ | Clean termination of the process |
-| `18` | _SIGCONT_ | Resume the process |
-| `19` | _SIGSTOP_ | Suspend the process |
+| `2` | *SIGINT* | Interrupt the process from the keyboard (++control+c++) |
+| `9` | *SIGKILL* | Immediate, forced termination of the process |
+| `15` | *SIGTERM* | Clean termination of the process |
+| `18` | *SIGCONT* | Resume the process |
+| `19` | *SIGSTOP* | Suspend the process |

Signals are the means of communication between processes. The `kill` command sends a signal to a process.
@@ -224,14 +226,14 @@ Signals are the means of communication between processes. The `kill` command sen
`nohup` allows the launching of a process independently of a connection.
-```
+```bash
nohup command
```

Example:
-```
-$ nohup myprogram.sh 0CTRL + Z keys simultaneously, the synchronous process is temporarily suspended. Access to the prompt is restored after displaying the number of the process that has just been suspended.
+By pressing the ++control+z++ keys simultaneously, the synchronous process is temporarily suspended. Access to the prompt is restored after displaying the number of the process that has just been suspended.

### `&` instruction
-The `&` statement executes the command asynchronously (the command is then called _job_) and displays the number of _job_. Access to the prompt is then returned.
+The `&` statement executes the command asynchronously (the command is then called a *job*) and displays the *job* number. Access to the prompt is then returned.
Example:
-```
+```bash
$ time ls -lR / > list.ls 2> /dev/null &
[1] 15430
$
```

-The _job_ number is obtained during background processing and is displayed in square brackets, followed by the `PID` number.
+The *job* number is obtained during background processing and is displayed in square brackets, followed by the `PID` number.

### `fg` and `bg` commands

The `fg` command puts the process in the foreground:
-```
+```bash
$ time ls -lR / > list.ls 2>/dev/null &
$ fg 1
time ls -lR / > list.ls 2>/dev/null
@@ -270,7 +272,7 @@ time ls -lR / > list.ls 2>/dev/null
while the command `bg` places it in the background:
-```
+```bash
[CTRL]+[Z]
^Z
[1]+ Stopped
$ bg 1
$
```

-Whether it was put in the background when it was created with the `&` argument or later with the CTRL +Z keys, a process can be brought back to the foreground with the `fg` command and its job number.
+Whether it was put in the background when it was created with the `&` argument or later with the ++control+z++ keys, a process can be brought back to the foreground with the `fg` command and its job number.

### `jobs` command
@@ -287,7 +289,7 @@ The `jobs` command displays the list of processes running in the background and
Example:
-```
+```bash
$ jobs
[1]- Running sleep 1000
[2]+ Running find / > arbo.txt
@@ -296,24 +298,26 @@ $ jobs
The columns represent:

1. job number
-2. the order that the processes run
-- a `+` : The process selected by default for the `fg` and `bg` commands when no job number is specified
-- a `-` : This process is the next process to take the `+`
-3. _Running_ (running process) or _Stopped_ (suspended process)
+2. the order that the processes run:
+
+ * a `+` : The process selected by default for the `fg` and `bg` commands when no job number is specified
+ * a `-` : This process is the next process to take the `+`
+
+3. *Running* (running process) or *Stopped* (suspended process)
4.
the command ### `nice` and `renice` commands The command `nice` allows the execution of a command by specifying its priority. -``` +```bash nice priority command ``` Example: -``` -$ nice -n+15 find / -name "file" +```bash +nice -n+15 find / -name "file" ``` Unlike `root`, a standard user can only reduce the priority of a process. Only values between +0 and +19 will be accepted. @@ -324,15 +328,16 @@ Unlike `root`, a standard user can only reduce the priority of a process. Only v The `renice` command allows you to change the priority of a running process. -``` +```bash renice priority [-g GID] [-p PID] [-u UID] ``` Example: +```bash +renice +15 -p 1664 ``` -$ renice +15 -p 1664 -``` + | Option | Description | |--------|-----------------------------------| | `-g` | `GID` of the process owner group. | @@ -353,7 +358,7 @@ The `renice` command acts on processes already running. It is therefore possible The `top` command displays the processes and their resource consumption. -``` +```bash $ top PID USER PR NI ... %CPU %MEM TIME+ COMMAND 2514 root 20 0 15 5.5 0:01.14 top @@ -374,11 +379,11 @@ The `top` command allows control of the processes in real-time and in interactiv ### `pgrep` and `pkill` commands -The `pgrep` command searches the running processes for a process name and displays the _PID_ matching the selection criteria on the standard output. +The `pgrep` command searches the running processes for a process name and displays the *PID* matching the selection criteria on the standard output. -The `pkill` command will send each process the specified signal (by default _SIGTERM_). +The `pkill` command will send each process the specified signal (by default *SIGTERM*). -``` +```bash pgrep process pkill [option] [-signal] process ``` @@ -387,14 +392,14 @@ Examples: * Get the process number from `sshd`: - ``` - $ pgrep -u root sshd + ```bash + pgrep -u root sshd ``` * Kill all `tomcat` processes: - ``` - $ pkill tomcat + ```bash + pkill tomcat ``` !!! 
note
@@ -403,13 +408,13 @@ Examples:
In addition to sending signals to the relevant processes, the `pkill` command can also end the user's connection session according to the terminal number, such as:

-```
-$ pkill -t pts/1
+```bash
+pkill -t pts/1
```

### `killall` command

-This command's function is roughly the same as that of the `pkill` command. The usage is —`killall [option] [ -s SIGNAL | -SIGNAL ] NAME`. The default signal is _SIGTERM_.
+This command's function is roughly the same as that of the `pkill` command. The usage is `killall [option] [ -s SIGNAL | -SIGNAL ] NAME`. The default signal is *SIGTERM*.

| Options | Description |
| :--- | :--- |
@@ -419,8 +424,8 @@ This command's function is roughly the same as that of the `pkill` command. The
Example:

-```
-$ killall tomcat
+```bash
+killall tomcat
```

### `pstree` command
@@ -472,8 +477,8 @@ Hazard:
How can we check for any zombie processes in the current system?

-```
-$ ps -lef | awk '{print $2}' | grep Z
+```bash
+ps -lef | awk '{print $2}' | grep Z
```

These characters may appear in this column:
diff --git a/docs/books/admin_guide/09-backups.md b/docs/books/admin_guide/09-backups.md
index ef6c72fa8b..3b1d6602dc 100644
--- a/docs/books/admin_guide/09-backups.md
+++ b/docs/books/admin_guide/09-backups.md
@@ -10,14 +10,14 @@ In this chapter you will learn how to back up and restore your data with Linux.
**Objectives**: In this chapter, future Linux administrators will learn how to:

-:heavy_check_mark: use the `tar` and `cpio` command to make a backup;
-:heavy_check_mark: check their backups and restore data;
+:heavy_check_mark: use the `tar` and `cpio` commands to make a backup;
+:heavy_check_mark: check their backups and restore data;
:heavy_check_mark: compress or decompress their backups.
:checkered_flag: **backup**, **restore**, **compression**

-**Knowledge**: :star: :star: :star:
-**Complexity**: :star: :star:
+**Knowledge**: :star: :star: :star:
+**Complexity**: :star: :star:

**Reading time**: 40 minutes
@@ -104,13 +104,16 @@ There are many utilities to make backups.
The commands we will use here are `tar` and `cpio`.

* `tar`:
- * easy to use;
- * allows adding files to an existing backup.
+
+ 1. easy to use;
+ 2. allows adding files to an existing backup.
+
* `cpio`:
- * retains owners;
- * retains groups, dates and rights;
- * skips damaged files;
- * entire file system.
+
+ 1. retains owners;
+ 2. retains groups, dates and rights;
+ 3. skips damaged files;
+ 4. backs up an entire file system.

!!! Note
@@ -185,9 +188,9 @@ The default utility for creating backups on UNIX systems is the `tar` command. T
#### Estimate the size of a backup

-The following command estimates the size in kilobytes of a possible _tar_ file:
+The following command estimates the size in bytes of a possible *tar* file:

-```
+```bash
$ tar cf - /directory/to/backup/ | wc -c
20480
$ tar czf - /directory/to/backup/ | wc -c
@@ -208,10 +211,10 @@ Here is an example of a naming convention for a `tar` backup, knowing that the d
|---------|---------|------------------|----------------------------------------------|
| `cvf` | `home` | `home.tar` | `/home` in relative mode, uncompressed form |
| `cvfP` | `/etc` | `etc.A.tar` | `/etc` in absolute mode, no compression |
-| `cvfz` | `usr` | `usr.tar.gz` | `/usr` in relative mode, _gzip_ compression |
-| `cvfj` | `usr` | `usr.tar.bz2` | `/usr` in relative mode, _bzip2_ compression |
-| `cvfPz` | `/home` | `home.A.tar.gz` | `home` in absolute mode, _gzip_ compression |
-| `cvfPj` | `/home` | `home.A.tar.bz2` | `home` in absolute mode, _bzip2_ compression |
+| `cvfz` | `usr` | `usr.tar.gz` | `/usr` in relative mode, *gzip* compression |
+| `cvfj` | `usr` | `usr.tar.bz2` | `/usr` in relative mode, *bzip2* compression |
+| `cvfPz` | `/home` |
`home.A.tar.gz` | `home` in absolute mode, *gzip* compression | +| `cvfPj` | `/home` | `home.A.tar.bz2` | `home` in absolute mode, *bzip2* compression | | … | | | | #### Create a backup @@ -220,17 +223,16 @@ Here is an example of a naming convention for a `tar` backup, knowing that the d Creating a non-compressed backup in relative mode is done with the `cvf` keys: -``` +```bash tar c[vf] [device] [file(s)] ``` Example: -``` +```bash [root]# tar cvf /backups/home.133.tar /home/ ``` - | Key | Description | |-----|--------------------------------------------------------| | `c` | Creates a backup. | @@ -245,20 +247,19 @@ Example: Creating a non-compressed backup explicitly in absolute mode is done with the `cvfP` keys: -``` -$ tar c[vf]P [device] [file(s)] +```bash +tar c[vf]P [device] [file(s)] ``` Example: -``` +```bash [root]# tar cvfP /backups/home.133.P.tar /home/ ``` | Key | Description | |-----|-----------------------------------| -| `P` | Creates a backup in absolute mode. | - +| `P` |Creates a backup in absolute mode. | !!! Warning @@ -268,14 +269,13 @@ Example: Creating a compressed backup with `gzip` is done with the `cvfz` keys: -``` -$ tar cvzf backup.tar.gz dirname/ +```bash +tar cvzf backup.tar.gz dirname/ ``` | Key | Description | |-----|----------------------------------| -| `z` | Compresses the backup in _gzip_. | - +| `z` |Compresses the backup in *gzip*. | !!! Note @@ -289,13 +289,13 @@ $ tar cvzf backup.tar.gz dirname/ Creating a compressed backup with `bzip` is done with the keys `cvfj`: -``` -$ tar cvfj backup.tar.bz2 dirname/ +```bash +tar cvfj backup.tar.bz2 dirname/ ``` | Key | Description | |-----|-----------------------------------| -| `j` | Compresses the backup in _bzip2_. | +| `j` |Compresses the backup in *bzip2*. | !!! 
Note @@ -307,36 +307,36 @@ Compression, and consequently decompression, will have an impact on resource con Here is a ranking of the compression of a set of text files, from least to most efficient: -- compress (`.tar.Z`) -- gzip (`.tar.gz`) -- bzip2 (`.tar.bz2`) -- lzip (`.tar.lz`) -- xz (`.tar.xz`) +* compress (`.tar.Z`) +* gzip (`.tar.gz`) +* bzip2 (`.tar.bz2`) +* lzip (`.tar.lz`) +* xz (`.tar.xz`) #### Add a file or directory to an existing backup It is possible to add one or more items to an existing backup. -``` +```bash tar {r|A}[key(s)] [device] [file(s)] ``` To add `/etc/passwd` to the backup `/backups/home.133.tar`: -``` +```bash [root]# tar rvf /backups/home.133.tar /etc/passwd ``` Adding a directory is similar. Here add `dirtoadd` to `backup_name.tar`: -``` -$ tar rvf backup_name.tar dirtoadd +```bash +tar rvf backup_name.tar dirtoadd ``` | Key | Description | |-----|----------------------------------------------------------------------------------| -| `r` | Adds one or more files at the end of a direct access media backup (hard disk). | -| `A` | Adds one or more files at the end of a backup on sequential access media (tape). | +| `r` |Adds one or more files at the end of a direct access media backup (hard disk). | +| `A` |Adds one or more files at the end of a backup on sequential access media (tape). | !!! Note @@ -358,26 +358,26 @@ $ tar rvf backup_name.tar dirtoadd Viewing the contents of a backup without extracting it is possible. -``` +```bash tar t[key(s)] [device] ``` -| Key | Description | +| Key |Description | |-----|-------------------------------------------------------| -| `t` | Displays the content of a backup (compressed or not). | +| `t` |Displays the content of a backup (compressed or not). 
| Examples: -``` -$ tar tvf backup.tar -$ tar tvfz backup.tar.gz -$ tar tvfj backup.tar.bz2 +```bash +tar tvf backup.tar +tar tvfz backup.tar.gz +tar tvfj backup.tar.bz2 ``` -When the number of files in a backup becomes large, it is possible to _pipe_ the result of the `tar` command to a _pager_ (`more`, `less`, `most`, etc.): +When the number of files in a backup becomes large, it is possible to *pipe* the result of the `tar` command to a *pager* (`more`, `less`, `most`, etc.): -``` -$ tar tvf backup.tar | less +```bash +tar tvf backup.tar | less ``` !!! Tip @@ -392,14 +392,14 @@ $ tar tvf backup.tar | less The integrity of a backup can be tested with the `W` key at the time of its creation: -``` -$ tar cvfW file_name.tar dir/ +```bash +tar cvfW file_name.tar dir/ ``` The integrity of a backup can be tested with the key `d` after its creation: -``` -$ tar vfd file_name.tar dir/ +```bash +tar vfd file_name.tar dir/ ``` !!! Tip @@ -419,7 +419,7 @@ $ tar vfd file_name.tar dir/ The `W` key is also used to compare the content of an archive against the filesystem: -``` +```bash $ tar tvfW file_name.tar Verify 1/file1 1/file1: Mod time differs @@ -428,33 +428,33 @@ Verify 1/file2 Verify 1/file3 ``` -The verification with the `W` key cannot be done with a compressed archive. The key `d` must be used: +The verification with the `W` key cannot be done with a compressed archive. 
The key `d` must be used:

-```
-$ tar dfz file_name.tgz
-$ tar dfj file_name.tar.bz2
+```bash
+tar dfz file_name.tgz
+tar dfj file_name.tar.bz2
```

-#### Extract (_untar_) a backup
+#### Extract (*untar*) a backup

-Extract (_untar_) a ``*.tar`` backup is done with the `xvf` keys:
+Extracting (*untarring*) a `*.tar` backup is done with the `xvf` keys:

Extract the `etc/exports` file from the `/savings/etc.133.tar` backup into the `etc` directory of the active directory:

-```
-$ tar xvf /backups/etc.133.tar etc/exports
+```bash
+tar xvf /backups/etc.133.tar etc/exports
```

Extract all files from the compressed backup `/backups/home.133.tar.bz2` into the active directory:

-```
+```bash
[root]# tar xvfj /backups/home.133.tar.bz2
```

Extract all files from the backup `/backups/etc.133.P.tar` to their original directory:

-```
-$ tar xvfP /backups/etc.133.P.tar
+```bash
+tar xvfP /backups/etc.133.P.tar
```

!!! Warning
@@ -463,21 +463,20 @@ $ tar xvfP /backups/etc.133.P.tar
Check the contents of the backup.

-| Key | Description |
+| Key | Description |
|------|----------------------------------------------------|
-| `x` | Extracts files from the backup, compressed or not. |
-
+| `x` | Extracts files from the backup, compressed or not. |

-Extracting a _tar-gzipped_ (`*.tar.gz`) backup is done with the `xvfz` keys:
+Extracting a *tar-gzipped* (`*.tar.gz`) backup is done with the `xvfz` keys:

-```
-$ tar xvfz backup.tar.gz
+```bash
+tar xvfz backup.tar.gz
```

-Extracting a _tar-bzipped_ (`*.tar.bz2`) backup is done with the `xvfj` keys:
+Extracting a *tar-bzipped* (`*.tar.bz2`) backup is done with the `xvfj` keys:

-```
-$ tar xvfj backup.tar.bz2
+```bash
+tar xvfj backup.tar.bz2
```

!!! Tip
@@ -488,52 +487,52 @@ $ tar xvfj backup.tar.bz2
To restore the files in their original directory (key `P` of a `tar xvf`), you must have generated the backup with the absolute path. That is, with the `P` key of a `tar cvf`.
-##### Extract only a file from a _tar_ backup +##### Extract only a file from a *tar* backup -To extract a specific file from a _tar_ backup, specify the name of that file at the end of the `tar xvf` command. +To extract a specific file from a *tar* backup, specify the name of that file at the end of the `tar xvf` command. -``` -$ tar xvf backup.tar /path/to/file +```bash +tar xvf backup.tar /path/to/file ``` The previous command extracts only the `/path/to/file` file from the `backup.tar` backup. This file will be restored to the `/path/to/` directory created, or already present, in the active directory. -``` -$ tar xvfz backup.tar.gz /path/to/file -$ tar xvfj backup.tar.bz2 /path/to/file +```bash +tar xvfz backup.tar.gz /path/to/file +tar xvfj backup.tar.bz2 /path/to/file ``` -##### Extract a folder from a backup _tar_ +##### Extract a folder from a backup *tar* To extract only one directory (including its subdirectories and files) from a backup, specify the directory name at the end of the `tar xvf` command. -``` -$ tar xvf backup.tar /path/to/dir/ +```bash +tar xvf backup.tar /path/to/dir/ ``` To extract multiple directories, specify each of the names one after the other: -``` -$ tar xvf backup.tar /path/to/dir1/ /path/to/dir2/ -$ tar xvfz backup.tar.gz /path/to/dir1/ /path/to/dir2/ -$ tar xvfj backup.tar.bz2 /path/to/dir1/ /path/to/dir2/ +```bash +tar xvf backup.tar /path/to/dir1/ /path/to/dir2/ +tar xvfz backup.tar.gz /path/to/dir1/ /path/to/dir2/ +tar xvfj backup.tar.bz2 /path/to/dir1/ /path/to/dir2/ ``` -##### Extract a group of files from a _tar_ backup using regular expressions (_regex_) +##### Extract a group of files from a *tar* backup using regular expressions (*regex*) -Specify a regular expression (_regex_) to extract the files matching the specified selection pattern. +Specify a regular expression (*regex*) to extract the files matching the specified selection pattern. 
For example, to extract all files with the extension `.conf`:

-```
-$ tar xvf backup.tar --wildcards '*.conf'
+```bash
+tar xvf backup.tar --wildcards '*.conf'
```

keys:

- * **--wildcards *.conf** corresponds to files with the extension `.conf`.
+* `--wildcards '*.conf'` matches files with the extension `.conf`.

-## _CoPy Input Output_ - `cpio`
+## *CoPy Input Output* - `cpio`

The `cpio` command allows saving on several successive media without specifying any options.
@@ -560,7 +559,7 @@ This list is provided with the commands `find`, `ls` or `cat`.
Syntax of the `cpio` command:

-```
+```bash
[files command |] cpio {-o| --create} [-options] [device]
```

Example:

With a redirection of the output of `cpio`:

-```
-$ find /etc | cpio -ov > /backups/etc.cpio
+```bash
+find /etc | cpio -ov > /backups/etc.cpio
```

Using the name of a backup media:

-```
-$ find /etc | cpio -ovF /backups/etc.cpio
+```bash
+find /etc | cpio -ovF /backups/etc.cpio
```

-The result of the `find` command is sent as input to the `cpio` command via a _pipe_ (character `|`, AltGr + 6).
+The result of the `find` command is sent as input to the `cpio` command via a *pipe* (character `|`, ++alt-graph+6++).

Here, the `find /etc` command returns a list of files corresponding to the contents of the `/etc` directory (recursively) to the `cpio` command, which performs the backup.

Do not forget the `>` sign when saving or the `F save_name_cpio`.

-| Options | Description |
+| Options | Description |
|---------|------------------------------------------------|
-| `-o` | Creates a backup (_output_). |
-| `-v` | Displays the name of the processed files. |
-| `-F` | Designates the backup to be modified (medium). |
+| `-o` | Creates a backup (*output*). |
+| `-v` | Displays the name of the processed files. |
+| `-F` | Designates the backup to be modified (medium).
|

Backup to a media:

-```
-$ find /etc | cpio -ov > /dev/rmt0
+```bash
+find /etc | cpio -ov > /dev/rmt0
```

The media can be of several types:
@@ -605,15 +604,15 @@ The media can be of several types:
#### Backup with relative path

-```
-$ cd /
-$ find etc | cpio -o > /backups/etc.cpio
+```bash
+cd /
+find etc | cpio -o > /backups/etc.cpio
```

#### Backup with absolute path

-```
-$ find /etc | cpio -o > /backups/etc.A.cpio
+```bash
+find /etc | cpio -o > /backups/etc.A.cpio
```

!!! Warning
@@ -624,14 +623,14 @@ $ find /etc | cpio -o > /backups/etc.A.cpio
### Add to a backup

-```
+```bash
[files command |] cpio {-o| --create} -A [-options] [device]
```

Example:

-```
-$ find /etc/shadow | cpio -o -AF SystemFiles.A.cpio
+```bash
+find /etc/shadow | cpio -o -AF SystemFiles.A.cpio
```

Adding files is only possible on direct access media.
@@ -645,7 +644,7 @@ Adding files is only possible on direct access media.
* Save **then** compress

-```
+```bash
$ find /etc | cpio -o > etc.A.cpio
$ gzip /backups/etc.A.cpio
$ ls /backups/etc.A.cpio*
@@ -654,8 +653,8 @@ $ ls /backups/etc.A.cpio*
* Save **and** compress

-```
-$ find /etc | cpio -o | gzip > /backups/etc.A.cpio.gz
+```bash
+find /etc | cpio -o | gzip > /backups/etc.A.cpio.gz
```

There is no option, unlike the `tar` command, to save and compress at the same time.
@@ -667,19 +666,19 @@ For the first method, the backup file is automatically renamed by the `gzip` uti
### Read the contents of a backup

-Syntax of the `cpio` command to read the contents of a _cpio_ backup:
+Syntax of the `cpio` command to read the contents of a *cpio* backup:

-```
+```bash
cpio -t [-options] [ tmp
cpio -iuE tmp -F etc.A.cpio
rm -f tmp
@@ -777,13 +776,13 @@ The `gzip` command compresses data.
Syntax of the `gzip` command:

-```
+```bash
gzip [options] [file ...]
```

Example:

-```
+```bash
$ gzip usr.tar
$ ls
usr.tar.gz
@@ -799,13 +798,13 @@ The `bunzip2` command also compresses data.
Syntax of the `bzip2` command:

-```
+```bash
bzip2 [options] [file ...]
```

Example:

-```
+```bash
$ bzip2 usr.cpio
$ ls
usr.cpio.bz2
@@ -821,13 +820,13 @@ The `gunzip` command decompresses compressed data.
Syntax of the `gunzip` command:

-```
+```bash
gunzip [options] [file ...]
```

Example:

-```
+```bash
$ gunzip usr.tar.gz
$ ls
usr.tar
@@ -847,13 +846,13 @@ The `bunzip2` command decompresses compressed data.
Syntax of the `bunzip2` command:

-```
+```bash
bunzip2 [options] [file ...]
```

Example:

-```
+```bash
$ bunzip2 usr.cpio.bz2
$ ls
usr.cpio
diff --git a/docs/books/admin_guide/10-boot.md b/docs/books/admin_guide/10-boot.md
index 23843b37c1..27e98c7421 100644
--- a/docs/books/admin_guide/10-boot.md
+++ b/docs/books/admin_guide/10-boot.md
@@ -9,16 +9,16 @@ In this chapter, you will learn how the system starts.
****
**Objectives**: In this chapter, future Linux administrators will learn:

-:heavy_check_mark: The different stages of the booting process;
-:heavy_check_mark: How Rocky Linux supports this boot by using GRUB2 and systemd;
-:heavy_check_mark: How to protect GRUB2 from an attack;
-:heavy_check_mark: How to manage the services;
+:heavy_check_mark: The different stages of the booting process;
+:heavy_check_mark: How Rocky Linux supports this boot by using GRUB2 and systemd;
+:heavy_check_mark: How to protect GRUB2 from an attack;
+:heavy_check_mark: How to manage the services;
:heavy_check_mark: How to access logs from `journald`.

:checkered_flag: **users**

-**Knowledge**: :star: :star:
-**Complexity**: :star: :star: :star:
+**Knowledge**: :star: :star:
+**Complexity**: :star: :star: :star:

**Reading time**: 20 minutes
****
@@ -49,7 +49,7 @@ The GRUB 2 configuration file is located under `/boot/grub2/grub.cfg` but this f
The GRUB2 menu configuration settings are located under `/etc/default/grub` and are used to generate the `grub.cfg` file.
-```
+```bash
# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
@@ -61,7 +61,7 @@ GRUB_DISABLE_RECOVERY="true"
If changes are made to one or more of these parameters, the `grub2-mkconfig` command must be run to regenerate the `/boot/grub2/grub.cfg` file.

-```
+```bash
[root] # grub2-mkconfig -o /boot/grub2/grub.cfg
```
@@ -71,7 +71,8 @@ If changes are made to one or more of these parameters, the `grub2-mkconfig` com
### The kernel

The kernel starts the `systemd` process with PID 1.
-```
+
+```bash
root 1 0 0 02:10 ? 00:00:02 /usr/lib/systemd/systemd --switched-root --system --deserialize 23
```
@@ -104,7 +105,7 @@ To password protect the GRUB2 bootloader:
* If a user has not yet been configured, use the `grub2-setpassword` command to provide a password for the root user:

-```
+```bash
# grub2-setpassword
```
@@ -114,14 +115,14 @@ A `/boot/grub2/user.cfg` file will be created if it was not already present. It
This command only supports configurations with a single root user.

-```
+```bash
[root]# cat /boot/grub2/user.cfg
GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CC6F56....A21
```

* Recreate the configuration file with the `grub2-mkconfig` command:

-```
+```bash
[root]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
@@ -185,27 +186,27 @@ Service units end with the `.service` file extension and have a similar purpose
| systemctl | Description |
|-------------------------------------------|-----------------------------------------|
-| systemctl start _name_.service | Starts a service |
-| systemctl stop _name_.service | Stops a service |
-| systemctl restart _name_.service | Restarts a service |
-| systemctl reload _name_.service | Reloads a configuration |
-| systemctl status _name_.service | Checks if a service is running |
-| systemctl try-restart _name_.service | Restarts a service only if it is running |
+| systemctl start *name*.service | Starts a service |
+| systemctl stop *name*.service | Stops a service |
+| systemctl restart *name*.service | Restarts a service |
+| systemctl reload *name*.service | Reloads a configuration |
+| systemctl status *name*.service | Checks if a service is running |
+| systemctl try-restart *name*.service | Restarts a service only if it is running |
| systemctl list-units --type service --all | Displays the status of all services |

The `systemctl` command is also used to `enable` or `disable` a system service and to display associated services:

| systemctl | Description |
|------------------------------------------|---------------------------------------------------------|
-| systemctl enable _name_.service | Activates a service |
-| systemctl disable _name_.service | Disables a service |
+| systemctl enable *name*.service | Activates a service |
+| systemctl disable *name*.service | Disables a service |
| systemctl list-unit-files --type service | Lists all services and checks if they are running |
| systemctl list-dependencies --after | Lists the services that start before the specified unit |
| systemctl list-dependencies --before | Lists the services that start after the specified unit |

Examples:

-```
+```bash
systemctl stop nfs-server.service
# or
systemctl stop nfs-server
@@ -213,24 +214,24 @@ systemctl stop nfs-server To list all units currently loaded: -``` +```bash systemctl list-units --type service ``` To list all units to check if they are activated: -``` +```bash systemctl list-unit-files --type service ``` -``` +```bash systemctl enable httpd.service systemctl disable bluetooth.service ``` ### Example of a .service file for the postfix service -``` +```bash postfix.service Unit File What follows is the content of the /usr/lib/systemd/system/postfix.service unit file as currently provided by the postfix package: @@ -275,20 +276,20 @@ Similarly, the `multi-user.target` unit starts other essential system services, To determine which target is used by default: -``` +```bash systemctl get-default ``` This command searches for the target of the symbolic link located at `/etc/systemd/system/default.target` and displays the result. -``` +```bash $ systemctl get-default graphical.target ``` The `systemctl` command can also provide a list of available targets: -``` +```bash systemctl list-units --type target UNIT LOAD ACTIVE SUB DESCRIPTION basic.target loaded active active Basic System @@ -314,13 +315,13 @@ timers.target loaded active active Timers To configure the system to use a different default target: -``` +```bash systemctl set-default name.target ``` Example: -``` +```bash # systemctl set-default multi-user.target rm '/etc/systemd/system/default.target' ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target' @@ -328,7 +329,7 @@ ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.t To switch to a different target unit in the current session: -``` +```bash systemctl isolate name.target ``` @@ -340,7 +341,7 @@ On Rocky 8, the `rescue mode` is equivalent to the old `single user mode` and re To change the current target and enter `rescue mode` in the current session: -``` +```bash systemctl rescue ``` @@ -348,7 +349,7 @@ systemctl rescue To change the current target and 
enter emergency mode in the current session: -``` +```bash systemctl emergency ``` @@ -377,7 +378,7 @@ The format of the native log file, which is a structured and indexed binary file The `journalctl` command displays the log files. -``` +```bash journalctl ``` @@ -392,7 +393,7 @@ The command lists all log files generated on the system. The structure of this o With continuous display, log messages are displayed in real time. -``` +```bash journalctl -f ``` @@ -402,7 +403,7 @@ This command returns a list of the ten most recent log lines. The journalctl uti It is possible to use different filtering methods to extract information that fits different needs. Log messages are often used to track erroneous behavior on the system. To view entries with a selected or higher priority: -``` +```bash journalctl -p priority ``` diff --git a/docs/books/admin_guide/11-tasks.md b/docs/books/admin_guide/11-tasks.md index a418c32728..38225e4abb 100644 --- a/docs/books/admin_guide/11-tasks.md +++ b/docs/books/admin_guide/11-tasks.md @@ -10,8 +10,8 @@ In this chapter you will learn how to manage scheduled tasks. **Objectives**: In this chapter, future Linux administrators will learn how to: -:heavy_check_mark: Linux deals with the tasks scheduling; -:heavy_check_mark: restrict the use of **`cron`** to certain users; +:heavy_check_mark: Linux deals with the tasks scheduling; +:heavy_check_mark: restrict the use of **`cron`** to certain users; :heavy_check_mark: schedule tasks. :checkered_flag: **crontab**, **crond**, **scheduling**, **linux** @@ -48,7 +48,7 @@ The `cron` service is run by a `crond` daemon present in memory. 
To check its status: -``` +```bash [root] # systemctl status crond ``` @@ -58,13 +58,13 @@ To check its status: Initialization of the `crond` daemon in manual: -``` +```bash [root]# systemctl {status|start|restart|stop} crond ``` Initialization of the `crond` daemon at startup: -``` +```bash [root]# systemctl enable crond ``` @@ -105,15 +105,16 @@ By default, `/etc/cron.deny` exists and is empty and `/etc/cron.allow` does not Only **user1** will be able to use `cron`. -``` +```bash [root]# vi /etc/cron.allow user1 ``` ### Prohibit a user + Only **user2** will not be able to use `cron`. -``` +```bash [root]# vi /etc/cron.deny user2 ``` @@ -132,17 +133,17 @@ This file contains all the information the `crond` needs to know regarding all t The `crontab` command is used to manage the schedule file. -``` +```bash crontab [-u user] [-e | -l | -r] ``` Example: -``` +```bash [root]# crontab -u user1 -e ``` -| Option | Description | +| Option |Description | |--------|-----------------------------------------------------------| | `-e` | Edits the schedule file with vi | | `-l` | Displays the contents of the schedule file | @@ -184,7 +185,7 @@ The `crontab` file is structured according to the following rules. * Each line ends with a carriage return; * A `#` at the beginning of the line comments it. 
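As an illustrative aside to the crontab field layout covered in this hunk (the entry and script path below are made-up examples, not part of the patch), the five time fields can be picked apart with plain shell word-splitting:

```shell
# Sketch only: split a crontab entry into its five time fields and the command.
entry='10 4 1 * * /root/scripts/backup.sh'
set -f                # disable globbing so the '*' fields stay literal
set -- $entry         # word-split the entry into positional parameters
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
shift 5
echo "command=$*"
set +f
# prints:
# minute=10 hour=4 day-of-month=1 month=* day-of-week=*
# command=/root/scripts/backup.sh
```

So the example entry above runs `backup.sh` at 04:10 on the first day of every month.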
-``` +```bash [root]# crontab –e 10 4 1 * * /root/scripts/backup.sh 1 2 3 4 5 6 @@ -216,25 +217,25 @@ Examples: Script executed on April 15 at 10:25 am: -``` +```bash 25 10 15 04 * /root/scripts/script > /log/… ``` Run at 11am and then at 4pm every day: -``` +```bash 00 11,16 * * * /root/scripts/script > /log/… ``` Run every hour from 11am to 4pm every day: -``` +```bash 00 11-16 * * * /root/scripts/script > /log/… ``` Run every 10 minutes during working hours: -``` +```bash */10 8-17 * * 1-5 /root/scripts/script > /log/… ``` @@ -253,12 +254,12 @@ For the root user, `crontab` also has some special time settings: A user, rockstar, wants to edit his `crontab` file: -1) `crond` checks to see if he is allowed (`/etc/cron.allow` and `/etc/cron.deny`). +1. `crond` checks to see if he is allowed (`/etc/cron.allow` and `/etc/cron.deny`). -2) If he is, he accesses his `crontab` file (`/var/spool/cron/rockstar`). +2. If he is, he accesses his `crontab` file (`/var/spool/cron/rockstar`). -Every minute `crond` reads the schedule files. + Every minute `crond` reads the schedule files. -3) It executes the scheduled tasks. +3. It executes the scheduled tasks. -4) It reports systematically in a log file (`/var/log/cron`). +4. It reports systematically in a log file (`/var/log/cron`). diff --git a/docs/books/admin_guide/12-network.md b/docs/books/admin_guide/12-network.md index 861c0e7662..5ebb3d9d41 100644 --- a/docs/books/admin_guide/12-network.md +++ b/docs/books/admin_guide/12-network.md @@ -11,9 +11,9 @@ In this chapter you will learn how to work with and manage the network. 
**Objectives**: In this chapter you will learn how to: :heavy_check_mark: Configure a workstation to use DHCP; -:heavy_check_mark: Configure a workstation to use a static configuration; -:heavy_check_mark: Configure a workstation to use a gateway; -:heavy_check_mark: Configure a workstation to use DNS servers; +:heavy_check_mark: Configure a workstation to use a static configuration; +:heavy_check_mark: Configure a workstation to use a gateway; +:heavy_check_mark: Configure a workstation to use DNS servers; :heavy_check_mark: Troubleshoot the network of a workstation. :checkered_flag: **network**, **linux**, **ip** @@ -45,9 +45,9 @@ The minimum parameters to be defined for the machine are: Example: -* `pc-rocky`; -* `192.168.1.10`; -* `255.255.255.0`. +* `pc-rocky`; +* `192.168.1.10`; +* `255.255.255.0`. The notation called CIDR is more and more frequent: 192.168.1.10/24 @@ -103,7 +103,7 @@ In order for a computer to be part of a DNS domain, it must be given a DNS suffi !!! Note "Memory aid" - To remember the order of the layers of the OSI model, remember the following sentence: __Please Do Not Touch Steven's Pet Alligator__. + To remember the order of the layers of the OSI model, remember the following sentence: **Please Do Not Touch Steven's Pet Alligator**. | Layer | Protocoles | |-------------------|----------------------------------------------| @@ -170,7 +170,7 @@ Forget the old `ifconfig` command! Think `ip`! 
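As a side note to the CIDR notation mentioned in this hunk (`192.168.1.10/24` versus the `255.255.255.0` netmask), the equivalence between a prefix length and a dotted netmask can be checked with plain shell arithmetic — a sketch, not part of the patch, and no network access needed:

```shell
# Sketch: turn a CIDR prefix (/24 here, as in the example above) into its
# dotted-quad netmask using only shell arithmetic.
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8)  & 255 )) $((  mask        & 255 ))
# prints 255.255.255.0
```

Changing `prefix` to 16 yields `255.255.0.0`, matching the usual class-B style mask.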
The `hostname` command displays or sets the host name of the system -``` +```bash hostname [-f] [hostname] ``` @@ -187,7 +187,7 @@ To assign a host name, it is possible to use the `hostname` command, but the cha To set the host name, the file `/etc/sysconfig/network` must be modified: -``` +```bash NETWORKING=yes HOSTNAME=pc-rocky.mondomaine.lan ``` @@ -208,13 +208,13 @@ It is therefore essential to fill in these two files before any configuration of The `/etc/hosts` file is a static host name mapping table, which follows the following format: -``` +```bash @IP [alias] [# comment] ``` Example of `/etc/hosts` file: -``` +```bash 127.0.0.1 localhost localhost.localdomain ::1 localhost localhost.localdomain 192.168.1.10 rockstar.rockylinux.lan rockstar @@ -236,7 +236,7 @@ The **NSS** (**N**ame **S**ervice **S**witch) allows configuration files (e.g., The `/etc/nsswitch.conf` file is used to configure the name service databases. -``` +```bash passwd: files shadow: files group: files @@ -254,7 +254,7 @@ The resolution of the name service can be tested with the `getent` command that The `/etc/resolv.conf` file contains the DNS name resolution configuration. -``` +```bash #Generated by NetworkManager domain mondomaine.lan search mondomaine.lan @@ -275,25 +275,25 @@ The `ip` command from the `iproute2` package allows you to configure an interfac Display interfaces: -``` +```bash [root]# ip link ``` Display interfaces information: -``` +```bash [root]# ip addr show ``` Display the information of an interface: -``` +```bash [root]# ip addr show eth0 ``` Display the ARP table: -``` +```bash [root]# ip neigh ``` @@ -307,34 +307,34 @@ The configuration of interfaces under Rocky Linux is done in the `/etc/sysconfig For each Ethernet interface, a `ifcfg-ethX` file allows for the configuration of the associated interface. 
-``` +```bash DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp HWADDR=00:0c:29:96:32:e3 ``` -* Interface name: (must be in the file name) +* Interface name: (must be in the file name) -``` +```bash DEVICE=eth0 ``` * Automatically start the interface: -``` +```bash ONBOOT=yes ``` * Make a DHCP request when the interface starts up: -``` +```bash BOOTPROTO=dhcp ``` * Specify the MAC address (optional but useful when there are several interfaces): -``` +```bash HWADDR=00:0c:29:96:32:e3 ``` @@ -344,7 +344,7 @@ HWADDR=00:0c:29:96:32:e3 * Restart the network service: -``` +```bash [root]# systemctl restart NetworkManager ``` @@ -352,7 +352,7 @@ HWADDR=00:0c:29:96:32:e3 The static configuration requires at least: -``` +```bash DEVICE=eth0 ONBOOT=yes BOOTPROTO=none @@ -362,25 +362,25 @@ NETMASK=255.255.255.0 * Here we are replacing "dhcp" with "none" which equals static configuration: -``` +```bash BOOTPROTO=none ``` * IP Address: -``` +```bash IPADDR=192.168.1.10 ``` * Subnet mask: -``` +```bash NETMASK=255.255.255.0 ``` * The mask can be specified with a prefix: -``` +```bash PREFIX=24 ``` @@ -392,7 +392,7 @@ PREFIX=24 ![Network architecture with a gateway](images/network-002.png) -``` +```bash DEVICE=eth0 ONBOOT=yes BOOTPROTO=none @@ -404,7 +404,7 @@ GATEWAY=192.168.1.254 The `ip route` command: -``` +```bash [root]# ip route show 192.168.1.0/24 dev eth0 […] src 192.168.1.10 metric 1 default via 192.168.1.254 dev eth0 proto static @@ -422,23 +422,23 @@ A system needs to resolve: * FQDNs into IP addresses -``` +```bash www.free.fr = 212.27.48.10 ``` * IP addresses into names -``` +```bash 212.27.48.10 = www.free.fr ``` * or to obtain information about an area: -``` +```bash MX de free.fr = 10 mx1.free.fr + 20 mx2.free.fr ``` -``` +```bash DEVICE=eth0 ONBOOT=yes BOOTPROTO=none @@ -453,7 +453,7 @@ DOMAIN=rockylinux.lan In this case, to reach the DNS, you have to go through the gateway. 
-``` +```bash #Generated by NetworkManager domain mondomaine.lan search mondomaine.lan @@ -471,7 +471,7 @@ It is the basic command for testing the network because it checks the connectivi Syntax of the `ping` command: -``` +```bash ping [-c numerical] destination ``` @@ -479,7 +479,7 @@ The `-c` (count) option allows you to stop the command after the countdown in se Example: -``` +```bash [root]# ping –c 4 localhost ``` @@ -487,41 +487,41 @@ Example: Validate connectivity from near to far -1) Validate the TCP/IP software layer +1. Validate the TCP/IP software layer -``` -[root]# ping localhost -``` + ```bash + [root]# ping localhost + ``` -"Pinging" the inner loop does not detect a hardware failure on the network interface. It simply determines whether the IP software configuration is correct. + "Pinging" the inner loop does not detect a hardware failure on the network interface. It simply determines whether the IP software configuration is correct. -2) Validate the network card +2. Validate the network card -``` -[root]# ping 192.168.1.10 -``` + ```bash + [root]# ping 192.168.1.10 + ``` -To determine that the network card is functional, we must now ping its IP address. The network card, if the network cable is not connected, should be in a "down" state. + To determine that the network card is functional, we must now ping its IP address. The network card, if the network cable is not connected, should be in a "down" state. -If the ping does not work, first check the network cable to your network switch and reassemble the interface (see the `if up` command), then check the interface itself. + If the ping does not work, first check the network cable to your network switch and reassemble the interface (see the `if up` command), then check the interface itself. -3) Validate the connectivity of the gateway +3. 
Validate the connectivity of the gateway -``` -[root]# ping 192.168.1.254 -``` + ```bash + [root]# ping 192.168.1.254 + ``` -4) Validate the connectivity of a remote server +4. Validate the connectivity of a remote server -``` -[root]# ping 172.16.1.2 -``` + ```bash + [root]# ping 172.16.1.2 + ``` -5) Validate the DNS service +5. Validate the DNS service -``` -[root]# ping www.free.fr -``` + ```bash + [root]# ping www.free.fr + ``` ### `dig` command @@ -529,13 +529,13 @@ The `dig` command is used to query the DNS server. The `dig` command syntax: -``` +```bash dig [-t type] [+short] [name] ``` Examples: -``` +```bash [root]# dig +short rockylinux.org 76.223.126.88 [root]# dig -t MX +short rockylinux.org  ✔ @@ -553,14 +553,13 @@ The `getent` (get entry) command is used to get an NSSwitch entry (`hosts` + `dn Syntax of the `getent` command: - -``` +```bash getent hosts name ``` Example: -``` +```bash [root]# getent hosts rockylinux.org 76.223.126.88 rockylinux.org ``` @@ -575,13 +574,13 @@ The `ipcalc` (**ip calculation**) command is used to calculate the address of a Syntax of the `ipcalc` command: -``` +```bash ipcalc [options] IP ``` Example: -``` +```bash [root]# ipcalc –b 172.16.66.203 255.255.240.0 BROADCAST=172.16.79.255 ``` @@ -616,13 +615,13 @@ The `ss` (**socket statistics**) command displays the listening ports on the net Syntax of the `ss` command: -``` +```bash ss [-tuna] ``` Example: -``` +```bash [root]# ss –tuna tcp LISTEN 0 128 *:22 *:* ``` @@ -641,13 +640,13 @@ The `netstat` command (**network statistics**) displays the listening ports on t Syntax of the `netstat` command: -``` +```bash netstat -tapn ``` Example: -``` +```bash [root]# netstat –tapn tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2161/sshd ``` @@ -658,13 +657,13 @@ A misconfiguration can cause multiple interfaces to use the same IP address. 
Thi When the network is malfunctioning, and when an IP address conflict could be the cause, it is possible to use the `arp-scan` software (requires the EPEL repository): -``` -$ dnf install arp-scan +```bash +dnf install arp-scan ``` Example: -``` +```bash $ arp-scan -I eth0 -l 172.16.1.104 00:01:02:03:04:05 3COM CORPORATION @@ -686,39 +685,39 @@ $ arp-scan -I eth0 -l The `ip` command can hot add an IP address to an interface -``` +```bash ip addr add @IP dev DEVICE ``` Example: -``` +```bash [root]# ip addr add 192.168.2.10 dev eth1 ``` The `ip` command allows for the activation or deactivation of an interface: -``` +```bash ip link set DEVICE up ip link set DEVICE down ``` Example: -``` +```bash [root]# ip link set eth1 up [root]# ip link set eth1 down ``` The `ip` command is used to add a route: -``` +```bash ip route add [default|netaddr] via @IP [dev device] ``` Example: -``` +```bash [root]# ip route add default via 192.168.1.254 [root]# ip route add 192.168.100.0/24 via 192.168.2.254 dev eth1 ``` @@ -731,7 +730,7 @@ The files used in this chapter are: A complete interface configuration could be this (file `/etc/sysconfig/network-scripts/ifcfg-eth0`): -``` +```bash DEVICE=eth0 ONBOOT=yes BOOTPROTO=none diff --git a/docs/books/admin_guide/13-softwares.md b/docs/books/admin_guide/13-softwares.md index 43a2b7c51c..2e3a58fec7 100644 --- a/docs/books/admin_guide/13-softwares.md +++ b/docs/books/admin_guide/13-softwares.md @@ -166,7 +166,6 @@ Only the short name of the package is required. | `info` | Displays the package information. | | `autoremove` | Removes all packages installed as dependencies but no longer needed. | - The `dnf install` command allows you to install the desired package without worrying about its dependencies, which will be resolved directly by `dnf` itself. 
```bash @@ -230,7 +229,6 @@ nginx-mod-mail.aarch64 : Nginx mail modules nginx-mod-stream.aarch64 : Nginx stream modules ``` - The `dnf remove` command removes a package from the system and its dependencies. Below is an excerpt of the **dnf remove httpd** command. ```bash @@ -258,13 +256,13 @@ Removing unused dependencies: The `dnf list` command lists all the packages installed on the system and present in the repository. It accepts several parameters: -| Parameter | Description | -|-------------|----------------------------------------------------------------------------| -| `all` | Lists the installed packages and then those available on the repositories. | -| `available` | Lists only the packages available for installation. | -| `updates` | Lists packages that can be upgraded. | -| `obsoletes` | Lists the packages made obsolete by higher versions available. | -| `recent` | Lists the latest packages added to the repository. | +| Parameter |Description | +|-------------|---------------------------------------------------------------------------| +| `all` |Lists the installed packages and then those available on the repositories. | +| `available` |Lists only the packages available for installation. | +| `updates` |Lists packages that can be upgraded. | +| `obsoletes` |Lists the packages made obsolete by higher versions available. | +| `recent` |Lists the latest packages added to the repository. | The `dnf info` command, as you might expect, provides detailed information about a package: @@ -494,7 +492,6 @@ The `dnf clean` command cleans all caches and temporary files created by `dnf`. | `metadata` | Removes all the repositories metadata. | | `packages` | Removes any cached packages. | - ### How DNF works The DNF manager relies on one or more configuration files to target the repositories containing the RPM packages. 
@@ -511,7 +508,7 @@ Each `.repo` file consists of at least the following information, one directive Example: -``` +```bash [baseos] # Short name of the repository name=Rocky Linux $releasever - BaseOS # Short name of the repository #Detailed name mirrorlist=http://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever # http address of a list or mirror @@ -543,19 +540,19 @@ Modules come from the AppStream repository and contain both streams and profiles You can obtain a list of all modules by executing the following command: -``` +```bash dnf module list ``` This will give you a long list of the available modules and the profiles that can be used for them. The thing is you probably already know what package you are interested in, so to find out if there are modules for a particular package, add the package name after "list". We will use our `postgresql` package example again here: -``` +```bash dnf module list postgresql ``` This will give you output that looks like this: -``` +```bash Rocky Linux 8 - AppStream Name Stream Profiles Summary postgresql 9.6 client, server [d] PostgreSQL server and client module @@ -570,7 +567,7 @@ Notice in the listing the "[d]". This means that this is the default. It shows t Using our example `postgresql` package, let's say that we want to enable version 12. 
To do this, you simply use the following: -``` +```bash dnf module enable postgresql:12 ``` @@ -578,7 +575,7 @@ Here the enable command requires the module name followed by a ":" and the strea To verify that you have enabled `postgresql` module stream version 12, use your list command again which should show you the following output: -``` +```bash Rocky Linux 8 - AppStream Name Stream Profiles Summary postgresql 9.6 client, server [d] PostgreSQL server and client module @@ -593,13 +590,13 @@ Here we can see the "[e]" for "enabled" next to stream 12, so we know that versi Now that our module stream is enabled, the next step is to install `postgresql`, the client application for the postgresql server. This can be achieved by running the following command: -``` +```bash dnf install postgresql ``` Which should give you this output: -``` +```bash ======================================================================================================================================== Package Architecture Version Repository Size ======================================================================================================================================== @@ -622,13 +619,13 @@ After approving by typing "y" you installed the application. It's also possible to directly install packages without even having to enable the module stream! In this example, let's assume that we only want the client profile applied to our installation. 
To do this, we simply enter this command: -``` +```bash dnf install postgresql:12/client ``` Which should give you this output: -``` +```bash ======================================================================================================================================== Package Architecture Version Repository Size ======================================================================================================================================== @@ -656,7 +653,7 @@ Answering "y" to the prompt will install everything you need to use postgresql v After you install, you may decide that for whatever reason, you need a different version of the stream. The first step is to remove your packages. Using our example `postgresql` package again, we would do this with: -``` +```bash dnf remove postgresql ``` @@ -664,13 +661,13 @@ This will display similar output as the install procedure above, except it will Once this step is complete, you can issue the reset command for the module using: -``` +```bash dnf module reset postgresql ``` Which will give you output like this: -``` +```bash Dependencies resolved. ======================================================================================================================================== Package Architecture Version Repository Size @@ -688,7 +685,7 @@ Is this ok [y/N]: Answering "y" to the prompt will then reset `postgresql` back to the default stream with the stream that we had enabled (12 in our example) no longer enabled: -``` +```bash Rocky Linux 8 - AppStream Name Stream Profiles Summary postgresql 9.6 client, server [d] PostgreSQL server and client module @@ -701,7 +698,7 @@ Now you can use the default. You can also use the switch-to sub-command to switch from one enabled stream to another. Using this method not only switches to the new stream, but installs the needed packages (either downgrade or upgrade) without a separate step. 
To use this method to enable `postgresql` stream version 13 and use the "client" profile, you would use: -``` +```bash dnf module switch-to postgresql:13/client ``` @@ -711,13 +708,13 @@ There may be times when you wish to disable the ability to install packages from To disable the module streams for `postgresql` simply do: -``` +```bash dnf module disable postgresql ``` And if you list out the `postgresql` modules again, you will see the following showing all `postgresql` module versions disabled: -``` +```bash Rocky Linux 8 - AppStream Name Stream Profiles Summary postgresql 9.6 [x] client, server [d] PostgreSQL server and client module @@ -799,7 +796,7 @@ epel-modular Extra Packages for Enterprise Linux Modular 8 - aarch64 The repository configuration files are located in `/etc/yum.repos.d/`. -``` +```bash ll /etc/yum.repos.d/ | grep epel -rw-r--r--. 1 root root 1485 Jan 31 17:19 epel-modular.repo -rw-r--r--. 1 root root 1422 Jan 31 17:19 epel.repo @@ -911,7 +908,7 @@ The `dnf-plugins-core` package adds plugins to `dnf` that will be useful for man Install the package on your system: -``` +```bash dnf install dnf-plugins-core ``` @@ -925,26 +922,26 @@ Examples: * Download a `.repo` file and use it: -``` +```bash dnf config-manager --add-repo https://packages.centreon.com/ui/native/rpm-standard/23.04/el8/centreon-23.04.repo ``` * You can also set an url as a base url for a repo: -``` +```bash dnf config-manager --add-repo https://repo.rocky.lan/repo ``` * Enable or disable one or more repos: -``` +```bash dnf config-manager --set-enabled epel centreon dnf config-manager --set-disabled epel centreon ``` * Add a proxy to your config file: -``` +```bash dnf config-manager --save --setopt=*.proxy=http://proxy.rocky.lan:3128/ ``` @@ -954,7 +951,7 @@ dnf config-manager --save --setopt=*.proxy=http://proxy.rocky.lan:3128/ * Activate a copr repo: -``` +```bash copr enable xxxx ``` @@ -962,19 +959,19 @@ copr enable xxxx Download rpm package instead of installing it: -``` 
+```bash dnf download ansible ``` If you just want to obtain the remote location url of the package: -``` +```bash dnf download --url ansible ``` Or if you want to also download the dependencies: -``` +```bash dnf download --resolv --alldeps ansible ``` @@ -984,7 +981,7 @@ After running a `dnf update`, the running processes will continue to run but wit The `needs-restarting` plugin will allow you to detect processes that are in this case. -``` +```bash dnf needs-restarting [-u] [-r] [-s] ``` @@ -997,11 +994,11 @@ dnf needs-restarting [-u] [-r] [-s] ### `versionlock` plugin -Sometimes it is useful to protect packages from all updates or to exclude certain versions of a package (because of known problems for example). For this purpose, the versionlock plugin will be of great help. +Sometimes it is useful to protect packages from all updates or to exclude certain versions of a package (because of known problems for example). For this purpose, the versionlock plugin will be of great help. You need to install an extra package: -``` +```bash dnf install python3-dnf-plugin-versionlock ``` @@ -1009,14 +1006,14 @@ Examples: * Lock the ansible version: -``` +```bash dnf versionlock add ansible Adding versionlock on: ansible-0:6.3.0-2.el9.* ``` * List locked packages: -``` +```bash dnf versionlock list ansible-0:6.3.0-2.el9.* ``` diff --git a/docs/books/admin_guide/14-special-authority.md b/docs/books/admin_guide/14-special-authority.md index b58611274b..66e7f36fb1 100644 --- a/docs/books/admin_guide/14-special-authority.md +++ b/docs/books/admin_guide/14-special-authority.md @@ -41,11 +41,11 @@ Their meanings are as follows: |:-----------:|--------------------------------------------------------------------------------------------------------------------------------------------| | **-** | Represents an ordinary file. Including plain text files (ASCII); binary files (binary); data format files (data); various compressed files. | | **d** | Represents a directory file. 
By default, there is one in every directory `.` and `..`. | -| **b** | Block device file. Including all kinds of hard drives, USB drives and so on. | +| **b** | Block device file. Including all kinds of hard drives, USB drives and so on. | | **c** | Character device file. Interface device of serial port, such as mouse, keyboard, etc. | -| **s** | Socket file. It is a file specially used for network communication. | +| **s** | Socket file. It is a file specially used for network communication. | | **p** | Pipe file. It is a special file type, the main purpose is to solve the errors caused by multiple programs accessing a file at the same time. FIFO is the abbreviation of first-in-first-out. | -| **l** | Soft link files, also called symbolic link files, are similar to shortcuts in Windows. Hard link file, also known as physical link file.| +| **l** | Soft link files, also called symbolic link files, are similar to shortcuts in Windows. Hard link file, also known as physical link file.| ## The meaning of basic permissions @@ -76,7 +76,7 @@ In GNU/Linux, in addition to the basic permissions mentioned above, there are al ### ACL permissions What is ACL? -ACL(Access Control List), the purpose is to solve the problem that the three identities under Linux can not meet the needs of resource permission allocation. +ACL(Access Control List), the purpose is to solve the problem that the three identities under Linux can not meet the needs of resource permission allocation. For example, the teacher gives lessons to the students, and the teacher creates a directory under the root directory of OS. Only the students in this class are allowed to upload and download, and others are not allowed. At this point, the permissions for the directory are 770. One day, a student from another school came to listen to the teacher, how should permissions be assigned? If you put this student in the **owner group**, he will have the same permissions as the students in this class - **rwx**. 
If the student is put into the **other users**, he will not have any permissions. At this time, the basic permission allocation cannot meet the requirements, and you need to use ACL. @@ -305,7 +305,7 @@ The role of "SetUID": * Only executable binaries can set SUID permissions. * The executor of the command should have x permission to the program. -* The executor of the command obtains the identity of the owner of the program file when executing the program. +* The executor of the command obtains the identity of the owner of the program file when executing the program. * The identity change is only valid during execution, and once the binary program is finished, the executor's identity is restored to the original identity. Why does GNU/Linux need such strange permissions? @@ -368,12 +368,12 @@ The role of "SetGID": * Only executable binaries can set SGID permissions. * The executor of the command should have x permission to the program. -* The executor of the command obtains the identity of the owner group of the program file when executing the program. +* The executor of the command obtains the identity of the owner group of the program file when executing the program. * The identity change is only valid during execution, and once the binary program is finished, the executor's identity is restored to the original identity. Take the `locate` command for example: -``` +```bash Shell > rpm -ql mlocate /usr/bin/locate ... @@ -417,7 +417,7 @@ Shell > chmod g-s FILE_NAME -rwxr-S--x 1 root root 0 Jan 14 12:11 sgid ``` -SGID can be used not only for executable binary file/program, but also for directories, but it is rarely used. +SGID can be used not only for executable binary file/program, but also for directories, but it is rarely used. * Ordinary users must have rwx permissions on the directory. * For files created by ordinary users in this directory, the default owner group is the owner group of the directory. 
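As an illustrative aside to the SGID-on-directories rules in this hunk (a sketch using a throwaway temporary directory, not part of the patch), the inheritance behaviour can be observed directly:

```shell
# Sketch: SGID (the leading 2 in mode 2770) on a directory means files
# created inside inherit the directory's owner group.
d=$(mktemp -d)             # throwaway demo directory
chmod 2770 "$d"            # rwxrws--- : the "s" in the group triad is SGID
stat -c '%a' "$d"          # prints 2770 with GNU stat
touch "$d/demo"
ls -ld "$d" "$d/demo"      # the new file's group column matches the directory's
rm -rf "$d"
```

In the teacher/student example above, setting SGID on the shared upload directory is what keeps every uploaded file owned by the class group rather than each student's primary group.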
@@ -489,15 +489,15 @@ Usage of the `chattr` command -- `chattr [ -RVf ] [ -v version ] [ -p project ]

The format of a symbolic mode is +-=[aAcCdDeFijPsStTu].

-* "+" means to increase permissions; 
-* "-" means to reduce permissions; 
+* "+" means to increase permissions;
+* "-" means to reduce permissions;
* "=" means equal to a permission.

The most commonly used permissions (also called attribute) are **a** and **i**.

-#### Description of attribute i:
+#### Description of attribute i

-| | Delete | Free modification | Append file content | View | Create file | 
+| | Delete | Free modification | Append file content | View | Create file |
|:----------:|:------:|:-----------------:|:-------------------:|:----:|:-----------:|
| file | × | × | × | √ | - |
| directory | x<br>(Directory and files under the directory) | √<br>(Files in the directory) | √<br>(Files in the directory) | √<br>(Files in the directory) | x |
@@ -558,9 +558,9 @@ Remove the i attribute from the above example:
Shell > chattr -i /tmp/filei /tmp/diri
```

-#### Description of attribute a:
+#### Description of attribute a

-| | Delete | Free modification | Append file content | View | Create file | 
+| | Delete | Free modification | Append file content | View | Create file |
|:----------:|:------:|:-----------------:|:-------------------:|:----:|:-----------:|
| file | × | × | √ | √ | - |
| directory | x
(Directory and files under the directory) | x
(Files in the directory) | √
(Files in the directory) | √
(Files in the directory) | √ | diff --git a/docs/books/disa_stig/disa_stig_part1.md b/docs/books/disa_stig/disa_stig_part1.md index bb0d05ac02..242ba7aa57 100644 --- a/docs/books/disa_stig/disa_stig_part1.md +++ b/docs/books/disa_stig/disa_stig_part1.md @@ -92,7 +92,7 @@ DISA STIG partitioning scheme for a 30G disk. My use case is as a simple web ser ![Accept Changes](images/disa_stig_pt1_img9.jpg) -### Step 5: Configure software for your environment: Server install without a GUI. +### Step 5: Configure software for your environment: Server install without a GUI This will matter in **Step 6**, so if you are using a UI or a workstation configuration the security profile will be different. @@ -132,7 +132,7 @@ In later tutorials we can get into joining this to a FreeIPA enterprise configur ![Reboot](images/disa_stig_pt1_img18.jpg) -### Step 11: Log in to your STIG'd Rocky Linux 8 System! +### Step 11: Log in to your STIG'd Rocky Linux 8 System ![DoD Warning](images/disa_stig_pt1_img19.jpg) diff --git a/docs/books/disa_stig/disa_stig_part2.md b/docs/books/disa_stig/disa_stig_part2.md index 78b2503557..6773270cdc 100644 --- a/docs/books/disa_stig/disa_stig_part2.md +++ b/docs/books/disa_stig/disa_stig_part2.md @@ -22,7 +22,7 @@ Over time, these things could change and you will want to keep an eye on it. Fre To list the security profiles available, we need to use the command `oscap info` provided by the `openscap-scanner` package. This should already be installed in your system if you've been following along since Part 1. 
To obtain the security profiles available: -``` +```bash oscap info /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml ``` @@ -48,11 +48,11 @@ DISA is just one of many Security Profiles supported by the Rocky Linux SCAP def There are two types to choose from here: * stig - Without a GUI -* stig_gui - With a GUI +* stig_gui - With a GUI Run a scan and create an HTML report for the DISA STIG: -``` +```bash sudo oscap xccdf eval --report unit-test-disa-scan.html --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml ``` @@ -69,15 +69,18 @@ And will output an HTML report: Next, we will generate a scan, and then use the results of the scan to generate a bash script to remediate the system based on the DISA stig profile. I do not recommend using automatic remediation, you should always review the changes before actually running them. 1) Generate a scan on the system: - ``` + + ```bash sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml ``` + 2) Use this scan output to generate the script: - ``` - sudo oscap xccdf generate fix --output draft-disa-remediate.sh --profile stig disa-stig-scan.xml + + ```bash + sudo oscap xccdf generate fix --output draft-disa-remediate.sh --profile stig disa-stig-scan.xml ``` -The resulting script will include all the changes it would make the system. +The resulting script will include all the changes it would make the system. !!! warning @@ -90,12 +93,15 @@ The resulting script will include all the changes it would make the system. You can also generate remediation actions in ansible playbook format. 
Let's repeat the section above, but this time with ansible output: 1) Generate a scan on the system: + + ```bash + sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml ``` - sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml - ``` + 2) Use this scan output to generate the script: - ``` - sudo oscap xccdf generate fix --fix-type ansible --output draft-disa-remediate.yml --profile stig disa-stig-scan.xml + + ```bash + sudo oscap xccdf generate fix --fix-type ansible --output draft-disa-remediate.yml --profile stig disa-stig-scan.xml ``` !!! warning @@ -109,4 +115,3 @@ You can also generate remediation actions in ansible playbook format. Let's repe Scott Shinn is the CTO for Atomicorp, and part of the Rocky Linux Security team. He has been involved with federal information systems at the White House, Department of Defense, and Intelligence Community since 1995. Part of that was creating STIG’s and the requirement that you use them and I am so very sorry about that. - diff --git a/docs/books/disa_stig/disa_stig_part3.md b/docs/books/disa_stig/disa_stig_part3.md index 46e117f73b..a1eefa17eb 100644 --- a/docs/books/disa_stig/disa_stig_part3.md +++ b/docs/books/disa_stig/disa_stig_part3.md @@ -10,9 +10,9 @@ tags: - enterprise --- -# Introduction +# Introduction -In part 1 of this series we covered how to build our web server with the base RHEL8 DISA STIG applied, and in part 2 we learned how to test the STIG compliance with the OpenSCAP tool. Now we’re going to actually do something with the system, and build a simple web application and apply the DISA web server STIG: https://www.stigviewer.com/stig/web_server/ +In part 1 of this series we covered how to build our web server with the base RHEL8 DISA STIG applied, and in part 2 we learned how to test the STIG compliance with the OpenSCAP tool. 
Now we’re going to actually do something with the system, and build a simple web application and apply the DISA web server STIG: <https://www.stigviewer.com/stig/web_server/>

First, let's compare what we’re getting into here. The RHEL 8 DISA STIG is targeted at a very specific platform, so the controls are pretty easy to understand in that context, test, and apply. Application STIGs have to be portable across multiple platforms, so the content here is generic in order to work on different Linux distributions (RHEL, Ubuntu, SuSE, etc.). This means that tools like OpenSCAP won’t help us audit/remediate the configuration; we’re going to have to do this manually. Those STIGs are:

@@ -27,43 +27,43 @@ Before you start, you'll need to refer back to Part 1 and apply the DISA STIG Se

 1.) Install `apache` and `mod_ssl`

-```
- dnf install httpd mod_ssl
+```bash
+dnf install httpd mod_ssl
 ```

 2.) Configuration changes

-```
- sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
- dnf -y remove httpd-manual
- dnf -y install mod_session
-
- echo “MaxKeepAliveRequests 100” > /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionCookieName session path=/; HttpOnly; Secure;” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “Session On” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionMaxAge 600” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionCryptoCipher aes256” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “Timeout 10” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “TraceEnable Off” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “RequestReadTimeout 120” >> /etc/httpd/conf.d/disa-apache-stig.conf
-
- sed -i “s/^#LoadModule usertrack_module/LoadModule usertrack_module/g” /etc/httpd/conf.modules.d/00-optional.conf
- sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_ftp_module/#proxy_ftp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_http_module/#proxy_http_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+```bash
+sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
+dnf -y remove httpd-manual
+dnf -y install mod_session
+
+echo "MaxKeepAliveRequests 100" > /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionCookieName session path=/; HttpOnly; Secure;" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "Session On" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionMaxAge 600" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionCryptoCipher aes256" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "Timeout 10" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "TraceEnable Off" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "RequestReadTimeout 120" >> /etc/httpd/conf.d/disa-apache-stig.conf
+
+sed -i "s/^#LoadModule usertrack_module/LoadModule usertrack_module/g" /etc/httpd/conf.modules.d/00-optional.conf
+sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_ftp_module/#proxy_ftp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_http_module/#proxy_http_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
 ```

 3.) Update Firewall policy and start `httpd`

-```
- firewall-cmd --zone=public --add-service=https --permanent
- firewall-cmd --zone=public --add-service=https
- firewall-cmd --reload
- systemctl enable httpd
- systemctl start httpd
+```bash
+firewall-cmd --zone=public --add-service=https --permanent
+firewall-cmd --zone=public --add-service=https
+firewall-cmd --reload
+systemctl enable httpd
+systemctl start httpd
 ```

 ## Detail Controls Overview

@@ -78,7 +78,7 @@ If you’ve gotten this far, you’re probably interested in knowing more about

 ### Types

-* Technical - 24 controls
+* Technical - 24 controls
 * Operational - 23 controls

 We’re not going to cover the “why” for these changes in this article, just what needs to happen if it is a technical control. If there is nothing we can change like in the case of an Operational control, the **Fix:** field will be none. The good news in a lot of these cases, this is already the default in Rocky Linux 8, so you don’t need to change anything at all.

@@ -95,9 +95,9 @@ We’re not going to cover the “why” for these changes in this article, just

 **Severity:** Cat I High
 **Type:** Technical
-**Fix:**
+**Fix:**

-```
+```bash
 sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
 ```

@@ -119,133 +119,129 @@ sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
 **Type:** Technical
 **Fix:** None, Fixed by default in Rocky Linux 8

-**(V-214245)** The Apache web server must have Web Distributed Authoring (WebDAV) disabled.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**(V-214245)** The Apache web server must have Web Distributed Authoring (WebDAV) disabled.
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
 ```

 **(V-214264)** The Apache web server must be configured to integrate with an organization's security infrastructure.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, forward web server logs to SIEM
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, forward web server logs to SIEM

 **(V-214243)** The Apache web server must have resource mappings set to disable the serving of certain file types.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
-
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8

 **(V-214240)** The Apache web server must only contain services and functions necessary for operation.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
-
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 dnf remove httpd-manual
 ```

 **(V-214238)** Expansion modules must be fully reviewed, tested, and signed before they can exist on a production Apache web server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, disable all modules not required for the application
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, disable all modules not required for the application

 **(V-214268)** Cookies exchanged between the Apache web server and the client, such as session cookies, must have cookie properties set to prohibit client-side scripts from reading the cookie data.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
-dnf install mod_session
+```bash
+dnf install mod_session
 echo "SessionCookieName session path=/; HttpOnly; Secure;" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214269)** The Apache web server must remove all export ciphers to protect the confidentiality and integrity of transmitted information.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8 DISA STIG security Profile
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8 DISA STIG security Profile

-**(V-214260)** The Apache web server must be configured to immediately disconnect or disable remote access to the hosted applications.
+**(V-214260)** The Apache web server must be configured to immediately disconnect or disable remote access to the hosted applications.

-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, this is a procedure to stop the web server
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, this is a procedure to stop the web server

 **(V-214249)** The Apache web server must separate the hosted applications from hosted Apache web server management functionality.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, this is related to the web applications rather than the server
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, this is related to the web applications rather than the server

 **(V-214246)** The Apache web server must be configured to use a specified IP address and port.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, the web server should be configured to only listen on a specific IP / port
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, the web server should be configured to only listen on a specific IP / port

 **(V-214247)** Apache web server accounts accessing the directory tree, the shell, or other operating system functions and utilities must only be administrative accounts.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, all files, and directories served by the web server need to be owned by administrative users, and not the web server user.
-
-**(V-214244)** The Apache web server must allow the mappings to unused and vulnerable scripts to be removed.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, all files and directories served by the web server need to be owned by administrative users, and not the web server user.
+
+**(V-214244)** The Apache web server must allow the mappings to unused and vulnerable scripts to be removed.

-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, any cgi-bin or other Script/ScriptAlias mappings that are not used must be removed
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, any cgi-bin or other Script/ScriptAlias mappings that are not used must be removed

 **(V-214263)** The Apache web server must not impede the ability to write specified log record content to an audit log server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Work with the SIEM administrator to allow the ability to write specified log record content to an audit log server.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to allow the ability to write specified log record content to an audit log server.

 **(V-214228)** The Apache web server must limit the number of allowed simultaneous session requests.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "MaxKeepAliveRequests 100" > /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214229)** The Apache web server must perform server-side session management.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 sed -i "s/^#LoadModule usertrack_module/LoadModule usertrack_module/g" /etc/httpd/conf.modules.d/00-optional.conf
 ```

-**(V-214266)** The Apache web server must prohibit or restrict the use of nonsecure or unnecessary ports, protocols, modules, and/or services.
+**(V-214266)** The Apache web server must prohibit or restrict the use of nonsecure or unnecessary ports, protocols, modules, and/or services.

-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Ensure the website enforces the use of IANA well-known ports for HTTP and HTTPS.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Ensure the website enforces the use of IANA well-known ports for HTTP and HTTPS.

 **(V-214241)** The Apache web server must not be a proxy server.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
 sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
 sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
@@ -256,191 +252,190 @@ sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.

 **(V-214265)** The Apache web server must generate log records that can be mapped to Coordinated Universal Time (UTC) or Greenwich Mean Time (GMT) which are stamped at a minimum granularity of one second.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8

 **(V-214256)** Warning and error messages displayed to clients must be modified to minimize the identity of the Apache web server, patches, loaded modules, and directory paths.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** Use the "ErrorDocument" directive to enable custom error pages for 4xx or 5xx HTTP status codes.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** Use the "ErrorDocument" directive to enable custom error pages for 4xx or 5xx HTTP status codes.
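In the same append-to-conf style used by the other technical fixes in this part, the ErrorDocument guidance could be sketched as follows (reusing `disa-apache-stig.conf` and the `/errors/` page URIs are assumptions, not part of the STIG text):

```shell
# Sketch (hypothetical error-page URIs): serve minimal custom error pages
# so clients do not see default pages that reveal server details.
conf=/etc/httpd/conf.d/disa-apache-stig.conf
echo 'ErrorDocument 403 /errors/403.html' >> "$conf"
echo 'ErrorDocument 404 /errors/404.html' >> "$conf"
echo 'ErrorDocument 500 /errors/500.html' >> "$conf"
grep '^ErrorDocument' "$conf"    # confirm the directives were appended
```

The referenced `/errors/*.html` pages still have to be created under the document root and should contain no version or module information.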
**(V-214237)** The log data and records from the Apache web server must be backed up onto a different system or media.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, document the web server backup procedures
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, document the web server backup procedures

 **(V-214236)** The log information from the Apache web server must be protected from unauthorized modification or deletion.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, document the web server backup procedures
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, document the web server backup procedures

-**(V-214261)** Non-privileged accounts on the hosting system must only access Apache web server security-relevant information and functions through a distinct administrative account.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Restrict access to the web administration tool to only the System Administrator, Web Manager, or the Web Manager designees.
+**(V-214261)** Non-privileged accounts on the hosting system must only access Apache web server security-relevant information and functions through a distinct administrative account.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Restrict access to the web administration tool to only the System Administrator, Web Manager, or the Web Manager designees.

 **(V-214235)** The Apache web server log files must only be accessible by privileged users.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, To protect the integrity of the data that is being captured in the log files, ensure that only the members of the Auditors group, Administrators, and the user assigned to run the web server software is granted permissions to read the log files.
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, To protect the integrity of the data that is being captured in the log files, ensure that only the members of the Auditors group, Administrators, and the user assigned to run the web server software are granted permissions to read the log files.
+
 **(V-214234)** The Apache web server must use a logging mechanism that is configured to alert the Information System Security Officer (ISSO) and System Administrator (SA) in the event of a processing failure.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Work with the SIEM administrator to configure an alert when no audit data is received from Apache based on the defined schedule of connections.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to configure an alert when no audit data is received from Apache based on the defined schedule of connections.

-**(V-214233)** An Apache web server, behind a load balancer or proxy server, must produce log records containing the client IP information as the source and destination and not the load balancer or proxy IP information with each event.
+**(V-214233)** An Apache web server, behind a load balancer or proxy server, must produce log records containing the client IP information as the source and destination and not the load balancer or proxy IP information with each event.

-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Access the proxy server through which inbound web traffic is passed and configure settings to pass web traffic to the Apache web server transparently.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Access the proxy server through which inbound web traffic is passed and configure settings to pass web traffic to the Apache web server transparently.
-Refer to https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html for additional information on logging options based on your proxy/load balancing setup.
+Refer to <https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html> for additional information on logging options based on your proxy/load balancing setup.

 **(V-214231)** The Apache web server must have system logging enabled.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8

 **(V-214232)** The Apache web server must generate, at a minimum, log records for system startup and shutdown, system access, and system authentication events.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8

-V-214251 Cookies exchanged between the Apache web server and client, such as session cookies, must have security settings that disallow cookie access outside the originating Apache web server and hosted application.
+**(V-214251)** Cookies exchanged between the Apache web server and client, such as session cookies, must have security settings that disallow cookie access outside the originating Apache web server and hosted application.

-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "Session On" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214250)** The Apache web server must invalidate session identifiers upon hosted application user logout or other session termination.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "SessionMaxAge 600" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214252)** The Apache web server must generate a session ID long enough that it cannot be guessed through brute force.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "SessionCryptoCipher aes256" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214255)** The Apache web server must be tuned to handle the operational requirements of the hosted application.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "Timeout 10" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214254)** The Apache web server must be built to fail to a known safe state if system initialization fails, shutdown fails, or aborts fail.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Prepare documentation for disaster recovery methods for the Apache 2.4 web server in the event of the necessity for rollback.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Prepare documentation for disaster recovery methods for the Apache 2.4 web server in the event of the necessity for rollback.

-**(V-214257)** Debugging and trace information used to diagnose the Apache web server must be disabled.
+**(V-214257)** Debugging and trace information used to diagnose the Apache web server must be disabled.

-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "TraceEnable Off" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214230)** The Apache web server must use cryptography to protect the integrity of remote sessions.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 sed -i "s/^#SSLProtocol.*/SSLProtocol -ALL +TLSv1.2/g" /etc/httpd/conf.d/ssl.conf
 ```

-**(V-214258)** The Apache web server must set an inactive timeout for sessions.
+**(V-214258)** The Apache web server must set an inactive timeout for sessions.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**

-```
+```bash
 echo "RequestReadTimeout 120" >> /etc/httpd/conf.d/disa-apache-stig.conf
 ```

 **(V-214270)** The Apache web server must install security-relevant software updates within the configured time period directed by an authoritative source (e.g., IAVM, CTOs, DTMs, and STIGs).
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Install the current version of the web server software and maintain appropriate service packs and patches.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Install the current version of the web server software and maintain appropriate service packs and patches.

-**(V-214239)** The Apache web server must not perform user management for hosted applications.
+**(V-214239)** The Apache web server must not perform user management for hosted applications.

-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8

 **(V-214274)** The Apache web server htpasswd files (if present) must reflect proper ownership and permissions.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Ensure the SA or Web Manager account owns the "htpasswd" file. Ensure permissions are set to "550".
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Ensure the SA or Web Manager account owns the "htpasswd" file. Ensure permissions are set to "550".

 **(V-214259)** The Apache web server must restrict inbound connections from nonsecure zones.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Configure the "http.conf" file to include restrictions.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Configure the "httpd.conf" file to include restrictions.
Example:

-```
+```bash
 Require not ip 192.168.205
 Require not host phishers.example.com
 ```

-**(V-214267)** The Apache web server must be protected from being stopped by a non-privileged user.
+**(V-214267)** The Apache web server must be protected from being stopped by a non-privileged user.

-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by Rocky Linux 8 by default
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by Rocky Linux 8 by default

 **(V-214262)** The Apache web server must use a logging mechanism that is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** none, Work with the SIEM administrator to determine if the SIEM is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to determine if the SIEM is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.

 **(V-214272)** The Apache web server must be configured in accordance with the security configuration settings based on DoD security configuration or implementation guidance, including STIGs, NSA configuration guides, CTOs, and DTMs.
-**Severity:** Cat III Low
-**Type:** Operational
-**Fix:** None
+**Severity:** Cat III Low
+**Type:** Operational
+**Fix:** None

 ## About The Author

 Scott Shinn is the CTO for Atomicorp, and part of the Rocky Linux Security team. He has been involved with federal information systems at the White House, Department of Defense, and Intelligence Community since 1995. Part of that was creating STIGs and the requirement that you use them and I am so very sorry about that.
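After working through the controls in this part, a few quick spot-checks can confirm the technical fixes landed; a sketch, assuming `httpd` is running locally (the 405 expectation follows from `TraceEnable Off`):

```shell
# Sketch: verify the Apache hardening applied in this part.
httpd -t                                   # syntax-check all Apache configuration

# Confirm the key directives are present in the STIG conf file
grep -E '^(TraceEnable|Timeout|MaxKeepAliveRequests)' \
    /etc/httpd/conf.d/disa-apache-stig.conf

# With TraceEnable Off, Apache should refuse TRACE requests with a 405
curl -s -o /dev/null -w '%{http_code}\n' -X TRACE http://localhost/
```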
- diff --git a/docs/books/index.md b/docs/books/index.md index 1e8e6ac52c..47dc900260 100644 --- a/docs/books/index.md +++ b/docs/books/index.md @@ -9,6 +9,7 @@ contributors: @fromoz, Ganna Zhyrnova You have found the **Books** section of the documentation. This is where longer-form documentation is kept. These documents are broken down into sections or **_chapters_** to make it easy for you to work through them at your own pace and keeping track of your progress. These documents were created by people just like you, with a passion for certain subjects. Would you like to try your hand at writing an addition to this section? If so, That would be GREAT! Simply join the conversation on the [Mattermost Documentation channel](https://chat.rockylinux.org/rocky-linux/channels/documentation) and we will help you on your way. + ## Download for offline reading Our books can be downloaded in PDF format for offline reading. diff --git a/docs/books/learning_ansible/01-basic.md b/docs/books/learning_ansible/01-basic.md index a2c941c24b..67f667ddf8 100644 --- a/docs/books/learning_ansible/01-basic.md +++ b/docs/books/learning_ansible/01-basic.md @@ -13,13 +13,13 @@ In this chapter you will learn how to work with Ansible. 
**Objectives**: In this chapter you will learn how to: -:heavy_check_mark: Implement Ansible; -:heavy_check_mark: Apply configuration changes on a server; -:heavy_check_mark: Create first Ansible playbooks; +:heavy_check_mark: Implement Ansible; +:heavy_check_mark: Apply configuration changes on a server; +:heavy_check_mark: Create first Ansible playbooks; :checkered_flag: **ansible**, **module**, **playbook** -**Knowledge**: :star: :star: :star: +**Knowledge**: :star: :star: :star: **Complexity**: :star: :star: **Reading time**: 30 minutes @@ -87,7 +87,7 @@ To offer a graphical interface to your daily use of Ansible, you can install som Ansible is available in the _EPEL_ repository, but may sometimes be too old for the current version, and you'll want to work with a more recent version. -We will therefore consider two types of installation: +We will therefore consider two types of installation: * the one based on EPEL repositories * one based on the `pip` python package manager @@ -96,21 +96,21 @@ The _EPEL_ is required for both versions, so you can go ahead and install that n * EPEL installation: -``` -$ sudo dnf install epel-release +```bash +sudo dnf install epel-release ``` ### Installation from EPEL If we install Ansible from the _EPEL_, we can do the following: -``` -$ sudo dnf install ansible +```bash +sudo dnf install ansible ``` And then verify the installation: -``` +```bash $ ansible --version ansible [core 2.14.2] config file = /etc/ansible/ansible.cfg @@ -138,8 +138,8 @@ As we want to use a newer version of Ansible, we will install it from `python3-p At this stage, we can choose to install ansible with the version of python we want. -``` -$ sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust cargo curl +```bash +sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust cargo curl ``` !!! 
Note @@ -149,14 +149,14 @@ $ sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust We can now install Ansible: -``` -$ pip3.8 install --user ansible -$ activate-global-python-argcomplete --user +```bash +pip3.8 install --user ansible +activate-global-python-argcomplete --user ``` Check your Ansible version: -``` +```bash $ ansible --version ansible [core 2.13.11] config file = None @@ -184,7 +184,7 @@ There are two main configuration files: The configuration file would automatically be created if Ansible was installed with its RPM package. With a `pip` installation, this file does not exist. We'll have to create it by hand thanks to the `ansible-config` command: -``` +```bash $ ansible-config -h usage: ansible-config [-h] [--version] [-v] {list,dump,view,init} ... @@ -200,7 +200,7 @@ positional arguments: Example: -``` +```bash ansible-config init --disabled > /etc/ansible/ansible.cfg ``` @@ -224,7 +224,7 @@ It is sometimes necessary to think carefully about how to build this file. Go to the default inventory file, which is located under `/etc/ansible/hosts`. Some examples are provided and commented: -``` +```text # This is the default ansible 'hosts' file. # # It should live in /etc/ansible/hosts @@ -278,7 +278,7 @@ The inventory can be generated automatically in production, especially if you ha As you may have noticed, the groups are declared in square brackets. Then come the elements belonging to the groups. You can create, for example, a `rocky8` group by inserting the following block into this file: -``` +```bash [rocky8] 172.16.1.10 172.16.1.11 @@ -286,7 +286,7 @@ As you may have noticed, the groups are declared in square brackets. Then come t Groups can be used within other groups. 
In this case, it must be specified that the parent group is composed of subgroups with the `:children` attribute like this: -``` +```bash [linux:children] rocky8 debian9 @@ -310,7 +310,7 @@ Now that our management server is installed and our inventory is ready, it's tim The `ansible` command launches a task on one or more target hosts. -``` +```bash ansible [-m module_name] [-a args] [options] ``` @@ -322,37 +322,37 @@ Examples: * List the hosts belonging to the rocky8 group: -``` +```bash ansible rocky8 --list-hosts ``` * Ping a host group with the `ping` module: -``` +```bash ansible rocky8 -m ping ``` * Display facts from a host group with the `setup` module: -``` +```bash ansible rocky8 -m setup ``` * Run a command on a host group by invoking the `command` module with arguments: -``` +```bash ansible rocky8 -m command -a 'uptime' ``` * Run a command with administrator privileges: -``` +```bash ansible ansible_clients --become -m command -a 'reboot' ``` * Run a command using a custom inventory file: -``` +```bash ansible rocky8 -i ./local-inventory -m command -a 'date' ``` @@ -360,7 +360,7 @@ ansible rocky8 -i ./local-inventory -m command -a 'date' As in this example, it is sometimes simpler to separate the declaration of managed devices into several files (by cloud project for example) and provide Ansible with the path to these files, rather than to maintain a long inventory file. -| Option | Information | +| Option | Information | |--------------------------|-------------------------------------------------------------------------------------------------| | `-a 'arguments'` | The arguments to pass to the module. | | `-b -K` | Requests a password and runs the command with higher privileges. 
| @@ -380,26 +380,26 @@ This user will be used: On both machines, create an `ansible` user, dedicated to ansible: -``` -$ sudo useradd ansible -$ sudo usermod -aG wheel ansible +```bash +sudo useradd ansible +sudo usermod -aG wheel ansible ``` Set a password for this user: -``` -$ sudo passwd ansible +```bash +sudo passwd ansible ``` Modify the sudoers config to allow members of the `wheel` group to sudo without password: -``` -$ sudo visudo +```bash +sudo visudo ``` Our goal here is to comment out the default, and uncomment the NOPASSWD option so that these lines look like this when we are done: -``` +```bash ## Allows people in group wheel to run all commands # %wheel ALL=(ALL) ALL @@ -414,8 +414,8 @@ Our goal here is to comment out the default, and uncomment the NOPASSWD option s When using management from this point on, start working with this new user: -``` -$ sudo su - ansible +```bash +sudo su - ansible ``` ### Test with the ping module @@ -424,13 +424,13 @@ By default, password login is not allowed by Ansible. Uncomment the following line from the `[defaults]` section in the `/etc/ansible/ansible.cfg` configuration file and set it to True: -``` +```bash ask_pass = True ``` Run a `ping` on each server of the rocky8 group: -``` +```bash # ansible rocky8 -m ping SSH password: 172.16.1.10 | SUCCESS => { @@ -467,7 +467,7 @@ Password authentication will be replaced by a much more secure private/public ke The dual-key will be generated with the command `ssh-keygen` on the management station by the `ansible` user: -``` +```bash [ansible]$ ssh-keygen Generating public/private rsa key pair. 
Enter file in which to save the key (/home/ansible/.ssh/id_rsa): @@ -494,14 +494,14 @@ The key's randomart image is: The public key can be copied to the servers: -``` +```bash # ssh-copy-id ansible@172.16.1.10 # ssh-copy-id ansible@172.16.1.11 ``` Re-comment the following line from the `[defaults]` section in the `/etc/ansible/ansible.cfg` configuration file to prevent password authentication: -``` +```bash #ask_pass = True ``` @@ -509,7 +509,7 @@ Re-comment the following line from the `[defaults]` section in the `/etc/ansible For the next test, the `shell` module, allowing remote command execution, is used: -``` +```bash # ansible rocky8 -m shell -a "uptime" 172.16.1.10 | SUCCESS | rc=0 >> 12:36:18 up 57 min, 1 user, load average: 0.00, 0.00, 0.00 @@ -538,7 +538,7 @@ Collections are a distribution format for Ansible content that can include playb A module is invoked with the `-m` option of the `ansible` command: -``` +```bash ansible [-m module_name] [-a args] [options] ``` @@ -562,7 +562,7 @@ Each category of need has its own module. Here is a non-exhaustive list: The `dnf` module allows for the installation of software on the target clients: -``` +```bash # ansible rocky8 --become -m dnf -a name="httpd" 172.16.1.10 | SUCCESS => { "changed": true, @@ -586,7 +586,7 @@ The `dnf` module allows for the installation of software on the target clients: The installed software being a service, it is now necessary to start it with the module `systemd`: -``` +```bash # ansible rocky8 --become -m systemd -a "name=httpd state=started" 172.16.1.10 | SUCCESS => { "changed": true, @@ -630,7 +630,7 @@ Take a look at the different facts of your clients to get an idea of the amount We'll see later how to use facts in our playbooks and how to create our own facts. 
-``` +```bash # ansible ansible_clients -m setup | less 192.168.1.11 | SUCCESS => { "ansible_facts": { @@ -665,7 +665,7 @@ Ansible's playbooks describe a policy to be applied to remote systems, to force Learn more about [yaml here](https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html) -``` +```bash ansible-playbook ... [options] ``` @@ -694,7 +694,7 @@ The following playbook allows us to install Apache and MariaDB on our target ser Create a `test.yml` file with the following content: -``` +```bash --- - hosts: rocky8 <1> become: true <2> @@ -721,7 +721,7 @@ Create a `test.yml` file with the following content: The execution of the playbook is done with the command `ansible-playbook`: -``` +```bash $ ansible-playbook test.yml PLAY [rocky8] **************************************************************** @@ -753,7 +753,7 @@ PLAY RECAP ********************************************************************* For more readability, it is recommended to write your playbooks in full yaml format. In the previous example, the arguments are given on the same line as the module, the value of the argument following its name separated by an `=`. Look at the same playbook in full yaml: -``` +```bash --- - hosts: rocky8 become: true @@ -790,14 +790,15 @@ For more readability, it is recommended to write your playbooks in full yaml for Note about collections: Ansible now provides modules in the form of collections. Some modules are provided by default within the `ansible.builtin` collection, others must be installed manually via the: -``` +```bash ansible-galaxy collection install [collectionname] ``` + where [collectionname] is the name of the collection (the square brackets here are used to highlight the need to replace this with an actual collection name, and are NOT part of the command). 
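As a concrete sketch of the syntax above (using `community.general` purely as an example collection name), the install can be paired with `ansible-galaxy collection list` to confirm what landed and where:

```bash
# Install a collection from galaxy.ansible.com (community.general is an example name)
ansible-galaxy collection install community.general

# List installed collections, with versions and install paths
ansible-galaxy collection list
```

Collections installed this way land by default under `~/.ansible/collections/`, which the listing command will show.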
The previous example should be written like this: -``` +```bash --- - hosts: rocky8 become: true @@ -829,7 +830,7 @@ The previous example should be written like this: A playbook is not limited to one target: -``` +```bash --- - hosts: webservers become: true @@ -865,19 +866,19 @@ A playbook is not limited to one target: You can check the syntax of your playbook: -``` -$ ansible-playbook --syntax-check play.yml +```bash +ansible-playbook --syntax-check play.yml ``` You can also use a "linter" for yaml: -``` -$ dnf install -y yamllint +```bash +dnf install -y yamllint ``` then check the yaml syntax of your playbooks: -``` +```bash $ yamllint test.yml test.yml 8:1 error syntax error: could not find expected ':' (syntax) @@ -895,7 +896,7 @@ test.yml * Update your client distribution * Restart your client -``` +```bash ansible ansible_clients --become -m group -a "name=Paris" ansible ansible_clients --become -m group -a "name=Tokio" ansible ansible_clients --become -m group -a "name=NewYork" diff --git a/docs/books/learning_ansible/02-advanced.md b/docs/books/learning_ansible/02-advanced.md index cbcffc508b..b93e26d418 100644 --- a/docs/books/learning_ansible/02-advanced.md +++ b/docs/books/learning_ansible/02-advanced.md @@ -10,14 +10,14 @@ In this chapter you will continue to learn how to work with Ansible. **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: work with variables; -:heavy_check_mark: use loops; -:heavy_check_mark: manage state changes and react to them; +:heavy_check_mark: work with variables; +:heavy_check_mark: use loops; +:heavy_check_mark: manage state changes and react to them; :heavy_check_mark: manage asynchronous tasks. 
:checkered_flag: **ansible**, **module**, **playbook** -**Knowledge**: :star: :star: :star: +**Knowledge**: :star: :star: :star: **Complexity**: :star: :star: **Reading time**: 30 minutes @@ -49,7 +49,7 @@ A variable can be defined in different places, like in a playbook, in a role or For example, from a playbook: -``` +```bash --- - hosts: apache1 vars: @@ -61,8 +61,8 @@ For example, from a playbook: or from the command line: -``` -$ ansible-playbook deploy-http.yml --extra-vars "service=httpd" +```bash +ansible-playbook deploy-http.yml --extra-vars "service=httpd" ``` Once defined, a variable can be used by calling it between double braces: @@ -72,7 +72,7 @@ Once defined, a variable can be used by calling it between double braces: For example: -``` +```bash - name: make sure apache is started ansible.builtin.systemd: name: "{{ service['rhel'] }}" @@ -85,7 +85,7 @@ Of course, it is also possible to access the global variables (the **facts**) of Variables can be included in a file external to the playbook, in which case this file must be defined in the playbook with the `vars_files` directive: -``` +```bash --- - hosts: apache1 vars_files: @@ -94,7 +94,7 @@ Variables can be included in a file external to the playbook, in which case this The `myvariables.yml` file: -``` +```bash --- port_http: 80 ansible.builtin.systemd:: @@ -104,7 +104,7 @@ ansible.builtin.systemd:: It can also be added dynamically with the use of the module `include_vars`: -``` +```bash - name: Include secrets. 
ansible.builtin.include_vars: file: vault.yml @@ -114,14 +114,14 @@ It can also be added dynamically with the use of the module `include_vars`: To display a variable, you have to use the `debug` module as follows: -``` +```bash - ansible.builtin.debug: var: service['debian'] ``` You can also use the variable inside a text: -``` +```bash - ansible.builtin.debug: msg: "Print a variable in a message : {{ service['debian'] }}" ``` @@ -132,7 +132,7 @@ To save the return of a task and to be able to access it later, you have to use Use of a stored variable: -``` +```bash - name: /home content shell: ls /home register: homes @@ -152,13 +152,13 @@ Use of a stored variable: The strings that make up the stored variable can be accessed via the `stdout` value (which allows you to do things like `homes.stdout.find("core") != -1`), to exploit them using a loop (see `loop`), or simply by their indices as seen in the previous example. -### Exercises +### Exercises-1 * Write a playbook `play-vars.yml` that prints the distribution name of the target with its major version, using global variables. * Write a playbook using the following dictionary to display the services that will be installed: -``` +```bash service: web: name: apache @@ -184,7 +184,7 @@ With the help of loop, you can iterate a task over a list, a hash, or dictionary Simple example of use, creation of 4 users: -``` +```bash - name: add users user: name: "{{ item }}" @@ -201,7 +201,7 @@ At each iteration of the loop, the value of the list used is stored in the `item Of course, a list can be defined in an external file: -``` +```bash users: - antoine - patrick @@ -211,7 +211,7 @@ users: and be used inside the task like this (after having included the vars file): -``` +```bash - name: add users user: name: "{{ item }}" @@ -222,7 +222,7 @@ and be used inside the task like this (after having included the vars file): We can use the example seen during the study of stored variables to improve it. 
Use of a stored variable: -``` +```bash - name: /home content shell: ls /home register: homes @@ -241,7 +241,7 @@ In the loop, it becomes possible to use `item.key` which corresponds to the dict Let's see this through a concrete example, showing the management of the system users: -``` +```bash --- - hosts: rocky8 become: true @@ -269,7 +269,7 @@ Let's see this through a concrete example, showing the management of the system Many things can be done with the loops. You will discover the possibilities offered by loops when your use of Ansible pushes you to use them in a more complex way. -### Exercises +### Exercises-2 * Display the content of the `service` variable from the previous exercise using a loop. @@ -293,7 +293,7 @@ The `when` statement is very useful in many cases: not performing certain action Behind the `when` statement the variables do not need double braces (they are in fact Jinja2 expressions...). -``` +```bash - name: "Reboot only Debian servers" reboot: when: ansible_os_family == "Debian" @@ -301,7 +301,7 @@ The `when` statement is very useful in many cases: not performing certain action Conditions can be grouped with parentheses: -``` +```bash - name: "Reboot only CentOS version 6 and Debian version 7" reboot: when: (ansible_distribution == "CentOS" and ansible_distribution_major_version == "6") or @@ -310,7 +310,7 @@ Conditions can be grouped with parentheses: The conditions corresponding to a logical AND can be provided as a list: -``` +```bash - name: "Reboot only CentOS version 6" reboot: when: @@ -320,7 +320,7 @@ The conditions corresponding to a logical AND can be provided as a list: You can test the value of a boolean and verify that it is true: -``` +```bash - name: check if directory exists stat: path: /home/ansible @@ -338,19 +338,19 @@ You can test the value of a boolean and verify that it is true: You can also test that it is not true: -``` - when: - - file.stat.exists - - not file.stat.isdir +```bash +when: + - file.stat.exists + - not 
file.stat.isdir ``` You will probably have to test that a variable exists to avoid execution errors: -``` - when: myboolean is defined and myboolean +```bash +when: myboolean is defined and myboolean ``` -### Exercises +### Exercises-3 * Print the value of `service.web` only when `type` equals to `web`. @@ -368,7 +368,7 @@ A module, being idempotent, a playbook can detect that there has been a signific For example, several tasks may indicate that the `httpd` service needs to be restarted due to a change in its configuration files. But the service will only be restarted once to avoid multiple unnecessary starts. -``` +```bash - name: template configuration file template: src: template-site.j2 @@ -385,7 +385,7 @@ A handler is a kind of task referenced by a unique global name: Example of handlers: -``` +```bash handlers: - name: restart memcached @@ -401,7 +401,7 @@ handlers: Since version 2.2 of Ansible, handlers can listen directly as well: -``` +```bash handlers: - name: restart memcached @@ -441,7 +441,7 @@ By specifying a poll value of 0, Ansible will execute the task and continue with Here's an example using asynchronous tasks, which allows you to restart a server and wait for port 22 to be reachable again: -``` +```bash # Wait 2s and launch the reboot - name: Reboot system shell: sleep 2 && shutdown -r now "Ansible reboot triggered" @@ -468,7 +468,7 @@ You can also decide to launch a long-running task and forget it (fire and forget * Write a playbook `play-vars.yml` that prints the distribution name of the target with its major version, using global variables. 
-``` +```bash --- - hosts: ansible_clients @@ -479,7 +479,7 @@ You can also decide to launch a long-running task and forget it (fire and forget msg: "The distribution is {{ ansible_distribution }} version {{ ansible_distribution_major_version }}" ``` -``` +```bash $ ansible-playbook play-vars.yml PLAY [ansible_clients] ********************************************************************************* @@ -499,7 +499,7 @@ PLAY RECAP ********************************************************************* * Write a playbook using the following dictionary to display the services that will be installed: -``` +```bash service: web: name: apache @@ -511,7 +511,7 @@ service: The default type should be "web". -``` +```bash --- - hosts: ansible_clients vars: @@ -531,7 +531,7 @@ The default type should be "web". msg: "The {{ service[type]['name'] }} will be installed with the packages {{ service[type].rpm }}" ``` -``` +```bash $ ansible-playbook display-dict.yml PLAY [ansible_clients] ********************************************************************************* @@ -551,7 +551,7 @@ PLAY RECAP ********************************************************************* * Override the `type` variable using the command line: -``` +```bash ansible-playbook --extra-vars "type=db" display-dict.yml PLAY [ansible_clients] ********************************************************************************* @@ -570,7 +570,7 @@ PLAY RECAP ********************************************************************* * Externalize variables in a `vars.yml` file -``` +```bash type: web service: web: @@ -581,7 +581,7 @@ service: rpm: mariadb-server ``` -``` +```bash --- - hosts: ansible_clients vars_files: @@ -594,7 +594,6 @@ service: msg: "The {{ service[type]['name'] }} will be installed with the packages {{ service[type].rpm }}" ``` - * Display the content of the `service` variable from the previous exercise using a loop. !!! 
Note @@ -611,7 +610,7 @@ service: With `dict2items`: -``` +```bash --- - hosts: ansible_clients vars_files: @@ -625,7 +624,7 @@ With `dict2items`: loop: "{{ service | dict2items }}" ``` -``` +```bash $ ansible-playbook display-dict.yml PLAY [ansible_clients] ********************************************************************************* @@ -648,7 +647,7 @@ PLAY RECAP ********************************************************************* With `list`: -``` +```bash --- - hosts: ansible_clients vars_files: @@ -663,7 +662,7 @@ With `list`: ~ ``` -``` +```bash $ ansible-playbook display-dict.yml PLAY [ansible_clients] ********************************************************************************* @@ -685,7 +684,7 @@ PLAY RECAP ********************************************************************* * Print the value of `service.web` only when `type` equals to `web`. -``` +```bash --- - hosts: ansible_clients vars_files: @@ -705,7 +704,7 @@ PLAY RECAP ********************************************************************* when: type == "db" ``` -``` +```bash $ ansible-playbook display-dict.yml PLAY [ansible_clients] ********************************************************************************* diff --git a/docs/books/learning_ansible/03-working-with-files.md b/docs/books/learning_ansible/03-working-with-files.md index a2bf1c590a..8bf20f57aa 100644 --- a/docs/books/learning_ansible/03-working-with-files.md +++ b/docs/books/learning_ansible/03-working-with-files.md @@ -10,13 +10,13 @@ In this chapter you will learn how to manage files with Ansible. **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: modify the content of file; -:heavy_check_mark: upload files to the targeted servers; -:heavy_check_mark: retrieve files from the targeted servers. +:heavy_check_mark: modify the content of file; +:heavy_check_mark: upload files to the targeted servers; +:heavy_check_mark: retrieve files from the targeted servers. 
:checkered_flag: **ansible**, **module**, **files** -**Knowledge**: :star: :star: +**Knowledge**: :star: :star: **Complexity**: :star: **Reading time**: 20 minutes @@ -41,7 +41,7 @@ The module requires: Example of use: -``` +```bash - name: change value on inifile community.general.ini_file: dest: /path/to/file.ini @@ -62,7 +62,7 @@ In this case, the line to be modified in a file will be found using a regexp. For example, to ensure that the line starting with `SELINUX=` in the `/etc/selinux/config` file contains the value `enforcing`: -``` +```bash - ansible.builtin.lineinfile: path: /etc/selinux/config regexp: '^SELINUX=' @@ -79,7 +79,7 @@ When a file has to be copied from the Ansible server to one or more hosts, it is Here we are copying `myfile.conf` from one location to another: -``` +```bash - ansible.builtin.copy: src: /data/ansible/sources/myfile.conf dest: /etc/myfile.conf @@ -98,7 +98,7 @@ When a file has to be copied from a remote server to the local server, it is bes This module does the opposite of the `copy` module: -``` +```bash - ansible.builtin.fetch: src: /etc/myfile.conf dest: /data/ansible/backup/myfile-{{ inventory_hostname }}.conf @@ -107,7 +107,7 @@ ## `template` module -Ansible and its `template` module use the **Jinja2** template system (http://jinja.pocoo.org/docs/) to generate files on target hosts. +Ansible and its `template` module use the **Jinja2** template system (<http://jinja.pocoo.org/docs/>) to generate files on target hosts. !!! 
Note @@ -115,7 +115,7 @@ Ansible and its `template` module use the **Jinja2** template system (http://jin For example: -``` +```bash - ansible.builtin.template: src: /data/ansible/templates/monfichier.j2 dest: /etc/myfile.conf @@ -126,7 +126,7 @@ For example: It is possible to add a validation step if the targeted service allows it (for example apache with the command `apachectl -t`): -``` +```bash - template: src: /data/ansible/templates/vhost.j2 dest: /etc/httpd/sites-available/vhost.conf @@ -140,7 +140,7 @@ It is possible to add a validation step if the targeted service allows it (for e To upload files from a web site or ftp to one or more hosts, use the `get_url` module: -``` +```bash - get_url: url: http://site.com/archive.zip dest: /tmp/archive.zip diff --git a/docs/books/learning_ansible/04-ansible-galaxy.md b/docs/books/learning_ansible/04-ansible-galaxy.md index 59cfb19dc8..4764fd805e 100644 --- a/docs/books/learning_ansible/04-ansible-galaxy.md +++ b/docs/books/learning_ansible/04-ansible-galaxy.md @@ -10,12 +10,12 @@ In this chapter you will learn how to use, install, and manage Ansible roles and **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: install and manage collections. -:heavy_check_mark: install and manage roles. +:heavy_check_mark: install and manage collections. +:heavy_check_mark: install and manage roles. :checkered_flag: **ansible**, **ansible-galaxy**, **roles**, **collections** -**Knowledge**: :star: :star: +**Knowledge**: :star: :star: **Complexity**: :star: :star: :star: **Reading time**: 40 minutes @@ -32,7 +32,7 @@ The `ansible-galaxy` command manages roles and collections using [galaxy.ansible * To manage roles: -``` +```bash ansible-galaxy role [import|init|install|login|remove|...] ``` @@ -47,7 +47,7 @@ ansible-galaxy role [import|init|install|login|remove|...] * To manage collections: -``` +```bash ansible-galaxy collection [import|init|install|login|remove|...] 
``` @@ -73,13 +73,13 @@ You can check the code in the github repo of the role [here](https://github.com/ * Install the role. This needs only one command: -``` +```bash ansible-galaxy role install alemorvan.patchmanagement ``` * Create a playbook to include the role: -``` +```bash - name: Start a Patch Management hosts: ansible_clients vars: @@ -98,13 +98,13 @@ Let's create tasks that will be run before and after the update process: * Create the `custom_tasks` folder: -``` +```bash mkdir custom_tasks ``` * Create the `custom_tasks/pm_before_update_tasks_file.yml` (feel free to change the name and the content of this file) -``` +```bash --- - name: sample task before the update process debug: @@ -113,7 +113,7 @@ mkdir custom_tasks * Create the `custom_tasks/pm_after_update_tasks_file.yml` (feel free to change the name and the content of this file) -``` +```bash --- - name: sample task after the update process debug: @@ -122,7 +122,7 @@ mkdir custom_tasks And launch your first Patch Management: -``` +```bash ansible-playbook patchmanagement.yml PLAY [Start a Patch Management] ************************************************************************* @@ -210,14 +210,14 @@ You can also create your own roles for your own needs and publish them on the In A role skeleton, serving as a starting point for custom role development, can be generated by the `ansible-galaxy` command: -``` +```bash $ ansible-galaxy role init rocky8 - Role rocky8 was created successfully ``` The command will generate the following tree structure to contain the `rocky8` role: -``` +```bash tree rocky8/ rocky8/ ├── defaults @@ -260,7 +260,7 @@ Let's implement this with a "go anywhere" role that will create a default user a We will create a `rockstar` user on all of our servers. 
As we don't want this user to be overridden, let's define it in the `vars/main.yml`: -``` +```bash --- rocky8_default_group: name: rockstar @@ -273,7 +273,7 @@ rocky8_default_user: We can now use those variables inside our `tasks/main.yml` without any inclusion. -``` +```bash --- - name: Create default group group: @@ -289,7 +289,7 @@ We can now use those variables inside our `tasks/main.yml` without any inclusion To test your new role, let's create a `test-role.yml` playbook in the same directory as your role: -``` +```bash --- - name: Test my role hosts: localhost @@ -303,7 +303,7 @@ To test your new role, let's create a `test-role.yml` playbook in the same direc and launch it: -``` +```bash ansible-playbook test-role.yml PLAY [Test my role] ************************************************************************************ @@ -327,7 +327,7 @@ Let's see the use of default variables. Create a list of packages to install by default on your servers and an empty list of packages to uninstall. 
Edit the `defaults/main.yml` files and add those two lists: -``` +```bash rocky8_default_packages: - tree - vim @@ -336,7 +336,7 @@ rocky8_remove_packages: [] and use them in your `tasks/main.yml`: -``` +```bash - name: Install default packages (can be overridden) package: name: "{{ rocky8_default_packages }}" @@ -350,7 +350,7 @@ and use them in your `tasks/main.yml`: Test your role with the help of the playbook previously created: -``` +```bash ansible-playbook test-role.yml PLAY [Test my role] ************************************************************************************ @@ -376,7 +376,7 @@ localhost : ok=5 changed=0 unreachable=0 failed=0 s You can now override the `rocky8_remove_packages` in your playbook and uninstall for example `cockpit`: -``` +```bash --- - name: Test my role hosts: localhost @@ -391,7 +391,7 @@ You can now override the `rocky8_remove_packages` in your playbook and uninstall become_user: root ``` -``` +```bash ansible-playbook test-role.yml PLAY [Test my role] ************************************************************************************ @@ -417,7 +417,7 @@ localhost : ok=5 changed=1 unreachable=0 failed=0 s Obviously, there is no limit to how much you can improve your role. Imagine that for one of your servers, you need a package that is in the list of those to be uninstalled. You could then, for example, create a new list that can be overridden and then remove from the list of packages to be uninstalled those in the list of specific packages to be installed by using the jinja `difference()` filter. 
-``` +```bash - name: "Uninstall default packages (can be overridden) {{ rocky8_remove_packages }}" package: name: "{{ rocky8_remove_packages | difference(rocky8_specifics_packages) }}" @@ -434,13 +434,13 @@ Collections are a distribution format for Ansible content that can include playb To install or upgrade a collection: -``` +```bash ansible-galaxy collection install namespace.collection [--upgrade] ``` You can then use the newly installed collection using its namespace and name before the module's name or role's name: -``` +```bash - import_role: name: namespace.collection.rolename @@ -452,7 +452,7 @@ You can find a collection index [here](https://docs.ansible.com/ansible/latest/c Let's install the `community.general` collection: -``` +```bash ansible-galaxy collection install community.general Starting galaxy collection install process Process install dependency map @@ -464,7 +464,7 @@ community.general:3.3.2 was installed successfully We can now use the newly available module `yum_versionlock`: -``` +```bash - name: Start a Patch Management hosts: ansible_clients become: true @@ -487,7 +487,7 @@ We can now use the newly available module `yum_versionlock`: var: locks.meta.packages ``` -``` +```bash ansible-playbook versionlock.yml PLAY [Start a Patch Management] ************************************************************************* @@ -517,12 +517,12 @@ PLAY RECAP ********************************************************************* As with roles, you are able to create your own collection with the help of the `ansible-galaxy` command: -``` +```bash ansible-galaxy collection init rocky8.rockstarcollection - Collection rocky8.rockstarcollection was created successfully ``` -``` +```bash tree rocky8/rockstarcollection/ rocky8/rockstarcollection/ ├── docs diff --git a/docs/books/learning_ansible/05-deployments.md b/docs/books/learning_ansible/05-deployments.md index ca9c23c826..1da7a15c58 100644 --- a/docs/books/learning_ansible/05-deployments.md +++ 
b/docs/books/learning_ansible/05-deployments.md @@ -10,15 +10,15 @@ In this chapter you will learn how to deploy applications with the Ansible role **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: Implement Ansistrano; -:heavy_check_mark: Configure Ansistrano; -:heavy_check_mark: Use shared folders and files between deployed versions; -:heavy_check_mark: Deploying different versions of a site from git; -:heavy_check_mark: React between deployment steps. +:heavy_check_mark: Implement Ansistrano; +:heavy_check_mark: Configure Ansistrano; +:heavy_check_mark: Use shared folders and files between deployed versions; +:heavy_check_mark: Deploying different versions of a site from git; +:heavy_check_mark: React between deployment steps. :checkered_flag: **ansible**, **ansistrano**, **roles**, **deployments** -**Knowledge**: :star: :star: +**Knowledge**: :star: :star: **Complexity**: :star: :star: :star: **Reading time**: 40 minutes @@ -52,7 +52,7 @@ Ansistrano deploys applications by following these 5 steps: The skeleton of a deployment with Ansistrano looks like this: -``` +```bash /var/www/site/ ├── current -> ./releases/20210718100000Z ├── releases @@ -84,7 +84,7 @@ The managed server: For more efficiency, we will use the `geerlingguy.apache` role to configure the server: -``` +```bash $ ansible-galaxy role install geerlingguy.apache Starting galaxy role install process - downloading role 'apache', owned by geerlingguy @@ -95,7 +95,7 @@ Starting galaxy role install process We will probably need to open some firewall rules, so we will also install the collection `ansible.posix` to work with its module `firewalld`: -``` +```bash $ ansible-galaxy collection install ansible.posix Starting galaxy collection install process Process install dependency map @@ -126,7 +126,7 @@ Technical considerations: Our playbook to configure the server: `playbook-config-server.yml` -``` +```bash --- - hosts: ansible_clients become: yes @@ -137,27 +137,27 @@ Our 
playbook to configure the server: `playbook-config-server.yml` DirectoryIndex index.php index.htm apache_vhosts: - servername: "website" - documentroot: "{{ dest }}current/html" + documentroot: "{{ dest }}current/html" tasks: - name: create directory for website file: - path: /var/www/site/ - state: directory - mode: 0755 + path: /var/www/site/ + state: directory + mode: 0755 - name: install git package: - name: git - state: latest + name: git + state: latest - name: permit traffic in default zone for http service ansible.posix.firewalld: - service: http - permanent: yes - state: enabled - immediate: yes + service: http + permanent: yes + state: enabled + immediate: yes roles: - { role: geerlingguy.apache } @@ -165,13 +165,13 @@ Our playbook to configure the server: `playbook-config-server.yml` The playbook can be applied to the server: -``` -$ ansible-playbook playbook-config-server.yml +```bash +ansible-playbook playbook-config-server.yml ``` Note the execution of the following tasks: -``` +```bash TASK [geerlingguy.apache : Ensure Apache is installed on RHEL.] **************** TASK [geerlingguy.apache : Configure Apache.] ********************************** TASK [geerlingguy.apache : Add apache vhosts configuration.] ******************* @@ -184,7 +184,7 @@ The `geerlingguy.apache` role makes our job much easier by taking care of the in You can check that everything is working by using `curl`: -``` +```bash $ curl -I http://192.168.1.11 HTTP/1.1 404 Not Found Date: Mon, 05 Jul 2021 23:30:02 GMT @@ -202,7 +202,7 @@ Now that our server is configured, we can deploy the application. For this, we will use the `ansistrano.deploy` role in a second playbook dedicated to application deployment (for more readability). 
-``` +```bash $ ansible-galaxy role install ansistrano.deploy Starting galaxy role install process - downloading role 'deploy', owned by ansistrano @@ -216,7 +216,7 @@ The sources of the software can be found in the [github repository](https://gith We will create a playbook `playbook-deploy.yml` to manage our deployment: -``` +```bash --- - hosts: ansible_clients become: yes @@ -231,7 +231,7 @@ We will create a playbook `playbook-deploy.yml` to manage our deployment: - { role: ansistrano.deploy } ``` -``` +```bash $ ansible-playbook playbook-deploy.yml PLAY [ansible_clients] ********************************************************* @@ -258,13 +258,13 @@ TASK [ansistrano.deploy : ANSISTRANO | Change softlink to new release] TASK [ansistrano.deploy : ANSISTRANO | Clean up releases] PLAY RECAP ******************************************************************************************************************************************************************************************************** -192.168.1.11 : ok=25 changed=8 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 +192.168.1.11 : ok=25 changed=8 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 ``` So many things done with only 11 lines of code! -``` +```html $ curl http://192.168.1.11 @@ -282,7 +282,7 @@ You can now connect by ssh to your client machine. * Make a `tree` on the `/var/www/site/` directory: -``` +```bash $ tree /var/www/site/ /var/www/site ├── current -> ./releases/20210722155312Z @@ -290,7 +290,7 @@ $ tree /var/www/site/ │   └── 20210722155312Z │   ├── REVISION │   └── html -│   └── index.htm +│   └── index.htm ├── repo │   └── html │   └── index.htm @@ -305,7 +305,7 @@ Please note: * From the Ansible server, restart the deployment **3** times, then check on the client. 
-``` +```bash $ tree /var/www/site/ var/www/site ├── current -> ./releases/20210722160048Z @@ -325,7 +325,7 @@ var/www/site │   └── 20210722160048Z │   ├── REVISION │   └── html -│   └── index.htm +│   └── index.htm ├── repo │   └── html │   └── index.htm @@ -343,7 +343,7 @@ The `ansistrano_keep_releases` variable is used to specify the number of release * Using the `ansistrano_keep_releases` variable, keep only 3 releases of the project. Check. -``` +```bash --- - hosts: ansible_clients become: yes @@ -359,14 +359,14 @@ The `ansistrano_keep_releases` variable is used to specify the number of release - { role: ansistrano.deploy } ``` -``` +```bash --- $ ansible-playbook -i hosts playbook-deploy.yml ``` On the client machine: -``` +```bash $ tree /var/www/site/ /var/www/site ├── current -> ./releases/20210722160318Z @@ -382,7 +382,7 @@ $ tree /var/www/site/ │   └── 20210722160318Z │   ├── REVISION │   └── html -│   └── index.htm +│   └── index.htm ├── repo │   └── html │   └── index.htm @@ -391,8 +391,7 @@ $ tree /var/www/site/ ### Using shared_paths and shared_files - -``` +```bash --- - hosts: ansible_clients become: yes @@ -415,13 +414,13 @@ $ tree /var/www/site/ On the client machine, create the file `logs` in the `shared` directory: -``` +```bash sudo touch /var/www/site/shared/logs ``` Then execute the playbook: -``` +```bash TASK [ansistrano.deploy : ANSISTRANO | Ensure shared paths targets are absent] ******************************************************* ok: [192.168.10.11] => (item=img) ok: [192.168.10.11] => (item=css) @@ -435,7 +434,7 @@ changed: [192.168.10.11] => (item=logs) On the client machine: -``` +```bash $ tree -F /var/www/site/ /var/www/site/ ├── current -> ./releases/20210722160631Z/ @@ -488,7 +487,7 @@ Don't forget to modify the Apache configuration to take into account this change Change the playbook for the server configuration `playbook-config-server.yml` -``` +```bash --- - hosts: ansible_clients become: yes @@ -499,20 +498,20 @@ Change 
the playbook for the server configuration `playbook-config-server.yml` DirectoryIndex index.php index.htm apache_vhosts: - servername: "website" - documentroot: "{{ dest }}current/" # <1> + documentroot: "{{ dest }}current/" # <1> tasks: - name: create directory for website file: - path: /var/www/site/ - state: directory - mode: 0755 + path: /var/www/site/ + state: directory + mode: 0755 - name: install git package: - name: git - state: latest + name: git + state: latest roles: - { role: geerlingguy.apache } @@ -522,7 +521,7 @@ Change the playbook for the server configuration `playbook-config-server.yml` Change the playbook for the deployment `playbook-deploy.yml` -``` +```bash --- - hosts: ansible_clients become: yes @@ -550,7 +549,7 @@ Change the playbook for the deployment `playbook-deploy.yml` * Check on the client machine: -``` +```bash $ tree -F /var/www/site/ /var/www/site/ ├── current -> ./releases/20210722161542Z/ @@ -589,7 +588,7 @@ The `ansistrano_git_branch` variable is used to specify a `branch` or `tag` to d * Deploy the `releases/v1.1.0` branch: -``` +```bash --- - hosts: ansible_clients become: yes @@ -616,7 +615,7 @@ The `ansistrano_git_branch` variable is used to specify a `branch` or `tag` to d You can have fun, during the deployment, refreshing your browser, to see in 'live' the change. 
-``` +```html $ curl http://192.168.1.11 @@ -630,7 +629,7 @@ $ curl http://192.168.1.11 * Deploy the `v2.0.0` tag: -``` +```bash --- - hosts: ansible_clients become: yes @@ -653,7 +652,7 @@ $ curl http://192.168.1.11 - { role: ansistrano.deploy } ``` -``` +```html $ curl http://192.168.1.11 @@ -686,8 +685,7 @@ A playbook can be included through the variables provided for this purpose: * Easy example: send an email (or whatever you want like Slack notification) at the beginning of the deployment: - -``` +```bash --- - hosts: ansible_clients become: yes @@ -713,7 +711,7 @@ A playbook can be included through the variables provided for this purpose: Create the file `deploy/before-setup-tasks.yml`: -``` +```bash --- - name: Send a mail mail: @@ -721,7 +719,7 @@ Create the file `deploy/before-setup-tasks.yml`: delegate_to: localhost ``` -``` +```bash TASK [ansistrano.deploy : include] ************************************************************************************* included: /home/ansible/deploy/before-setup-tasks.yml for 192.168.10.11 @@ -729,7 +727,7 @@ TASK [ansistrano.deploy : Send a mail] ***************************************** ok: [192.168.10.11 -> localhost] ``` -``` +```bash [root] # mailx Heirloom Mail version 12.5 7/5/10. Type ? for help. "/var/spool/mail/root": 1 message 1 new @@ -738,7 +736,7 @@ Heirloom Mail version 12.5 7/5/10. Type ? for help. * You will probably have to restart some services at the end of the deployment, to flush caches for example. Let's restart Apache at the end of the deployment: -``` +```bash --- - hosts: ansible_clients become: yes @@ -765,7 +763,7 @@ Heirloom Mail version 12.5 7/5/10. Type ? for help. 
Create the file `deploy/after-symlink-tasks.yml`: -``` +```bash --- - name: restart apache systemd: @@ -773,7 +771,7 @@ Create the file `deploy/after-symlink-tasks.yml`: state: restarted ``` -``` +```bash TASK [ansistrano.deploy : include] ************************************************************************************* included: /home/ansible/deploy/after-symlink-tasks.yml for 192.168.10.11 diff --git a/docs/books/learning_ansible/06-large-scale-infrastructure.md b/docs/books/learning_ansible/06-large-scale-infrastructure.md index 91cce4f4a7..16cb1f6a75 100644 --- a/docs/books/learning_ansible/06-large-scale-infrastructure.md +++ b/docs/books/learning_ansible/06-large-scale-infrastructure.md @@ -10,12 +10,12 @@ In this chapter you will learn how to scale your configuration management system **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: Organize your code for large infrastructure; -:heavy_check_mark: Apply all or part of your configuration management to a group of nodes; +:heavy_check_mark: Organize your code for large infrastructure; +:heavy_check_mark: Apply all or part of your configuration management to a group of nodes; :checkered_flag: **ansible**, **config management**, **scale** -**Knowledge**: :star: :star: :star: +**Knowledge**: :star: :star: :star: **Complexity**: :star: :star: :star: :star: **Reading time**: 30 minutes @@ -52,7 +52,7 @@ We haven't discussed it here yet, but you should know that Ansible can automatic The Ansible documentation suggests that we organize our code as below: -``` +```bash inventories/ production/ hosts # inventory file for production servers @@ -82,7 +82,7 @@ The use of Ansible tags allows you to execute or skip a part of the tasks in you For example, let's modify our users creation task: -``` +```bash - name: add users user: name: "{{ item }}" @@ -98,7 +98,7 @@ For example, let's modify our users creation task: You can now play only the tasks with the tag `users` with the `ansible-playbook` 
option `--tags`: -``` +```bash ansible-playbook -i inventories/production/hosts --tags users site.yml ``` @@ -110,7 +110,7 @@ Let's focus on a proposal for the organization of files and directories necessar Our starting point will be the `site.yml` file. This file is a bit like the orchestra conductor of the CMS since it will only include the necessary roles for the target nodes if needed: -``` +```bash --- - name: "Config Management for {{ target }}" hosts: "{{ target }}" @@ -126,7 +126,7 @@ Of course, those roles must be created under the `roles` directory at the same l I like to manage my global vars inside a `vars/global_vars.yml`, even if I could store them inside a file located at `inventories/production/group_vars/all.yml` -``` +```bash --- - name: "Config Management for {{ target }}" hosts: "{{ target }}" @@ -141,7 +141,7 @@ I like to manage my global vars inside a `vars/global_vars.yml`, even if I could I also like to keep the possibility of disabling a functionality. So I include my roles with a condition and a default value like this: -``` +```bash --- - name: "Config Management for {{ target }}" hosts: "{{ target }}" @@ -160,8 +160,7 @@ I also like to keep the possibility of disabling a functionality. 
So I include m Don't forget to use the tags: - -``` +```bash - name: "Config Management for {{ target }}" hosts: "{{ target }}" vars_files: @@ -183,7 +182,7 @@ Don't forget to use the tags: You should get something like this: -``` +```bash $ tree cms cms ├── inventories @@ -218,7 +217,7 @@ cms Let's launch the playbook and run some tests: -``` +```bash $ ansible-playbook -i inventories/production/hosts -e "target=client1" site.yml PLAY [Config Management for client1] **************************************************************************** @@ -242,14 +241,13 @@ As you can see, by default, only the tasks of the `functionality1` role are play Let's activate in the inventory the `functionality2` for our targeted node and rerun the playbook: -``` +```bash $ vim inventories/production/host_vars/client1.yml --- enable_functionality2: true ``` - -``` +```bash $ ansible-playbook -i inventories/production/hosts -e "target=client1" site.yml PLAY [Config Management for client1] **************************************************************************** @@ -273,7 +271,7 @@ client1 : ok=3 changed=0 unreachable=0 failed=0 s Try to apply only `functionality2`: -``` +```bash $ ansible-playbook -i inventories/production/hosts -e "target=client1" --tags functionality2 site.yml PLAY [Config Management for client1] **************************************************************************** @@ -292,7 +290,7 @@ client1 : ok=2 changed=0 unreachable=0 failed=0 s Let's run on the whole inventory: -``` +```bash $ ansible-playbook -i inventories/production/hosts -e "target=plateform" site.yml PLAY [Config Management for plateform] ************************************************************************** diff --git a/docs/books/learning_ansible/07-working-with-filters.md b/docs/books/learning_ansible/07-working-with-filters.md index 12b1ce7532..a1d6338596 100644 --- a/docs/books/learning_ansible/07-working-with-filters.md +++ b/docs/books/learning_ansible/07-working-with-filters.md @@ 
-17,7 +17,7 @@ In this chapter you will learn how to transform data with jinja filters. :checkered_flag: **ansible**, **jinja**, **filters** -**Knowledge**: :star: :star: :star: +**Knowledge**: :star: :star: :star: **Complexity**: :star: :star: :star: :star: **Reading time**: 20 minutes @@ -34,7 +34,7 @@ These filters, written in python, allow us to manipulate and transform our ansib Throughout this chapter, we will use the following playbook to test the different filters presented: -``` +```bash - name: Manipulating the data hosts: localhost gather_facts: false @@ -78,7 +78,7 @@ Throughout this chapter, we will use the following playbook to test the differen The playbook will be played as follows: -``` +```bash ansible-playbook play-filter.yml ``` @@ -90,7 +90,7 @@ To know the type of a data (the type in python language), you have to use the `t Example: -``` +```bash - name: Display the type of a variable debug: var: true_boolean|type_debug @@ -98,7 +98,7 @@ Example: which gives us: -``` +```bash TASK [Display the type of a variable] ****************************************************************** ok: [localhost] => { "true_boolean|type_debug": "bool" @@ -107,13 +107,13 @@ ok: [localhost] => { It is possible to transform an integer into a string: -``` +```bash - name: Transforming a variable type debug: var: zero|string ``` -``` +```bash TASK [Transforming a variable type] *************************************************************** ok: [localhost] => { "zero|string": "0" @@ -122,7 +122,7 @@ ok: [localhost] => { Transform a string into an integer: -``` +```bash - name: Transforming a variable type debug: var: zero_string|int @@ -130,7 +130,7 @@ Transform a string into an integer: or a variable into a boolean: -``` +```bash - name: Display an integer as a boolean debug: var: non_zero | bool @@ -151,7 +151,7 @@ or a variable into a boolean: A character string can be transformed into upper or lower case: -``` +```bash - name: Lowercase a string of characters 
debug: var: whatever | lower @@ -163,7 +163,7 @@ A character string can be transformed into upper or lower case: which gives us: -``` +```bash TASK [Lowercase a string of characters] ***************************************************** ok: [localhost] => { "whatever | lower": "it's false!" @@ -179,7 +179,7 @@ The `replace` filter allows you to replace characters by others. Here we remove spaces or even replace a word: -``` +```bash - name: Replace a character in a string debug: var: whatever | replace(" ", "") @@ -191,7 +191,7 @@ Here we remove spaces or even replace a word: which gives us: -``` +```bash TASK [Replace a character in a string] ***************************************************** ok: [localhost] => { "whatever | replace(\" \", \"\")": "It'sfalse!" @@ -205,14 +205,13 @@ ok: [localhost] => { The `split` filter splits a string into a list based on a character: -``` +```bash - name: Cutting a string of characters debug: var: whatever | split(" ", "") ``` - -``` +```bash TASK [Cutting a string of characters] ***************************************************** ok: [localhost] => { "whatever | split(\" \")": [ @@ -227,7 +226,7 @@ ok: [localhost] => { It is frequent to have to join the different elements in a single string. We can then specify a character or a string to insert between each element. -``` +```bash - name: Joining elements of a list debug: var: my_simple_list|join(",") @@ -239,7 +238,7 @@ We can then specify a character or a string to insert between each element. which gives us: -``` +```bash TASK [Joining elements of a list] ***************************************************************** ok: [localhost] => { "my_simple_list|join(\",\")": "value_list_1,value_list_2,value_list_3" @@ -259,7 +258,7 @@ are frequently used, especially in loops. Note that it is possible to specify the name of the key and of the value to use in the transformation. 
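These filters are thin wrappers over plain Python data manipulation; a rough Python sketch of what `dict2items` and `items2dict` compute (default key/value names assumed, simple flat inputs only):

```python
# Rough Python equivalent of Ansible's dict2items / items2dict filters.
def dict2items(d, key_name="key", value_name="value"):
    # One {key, value} mapping per dictionary entry, in insertion order.
    return [{key_name: k, value_name: v} for k, v in d.items()]

def items2dict(items, key_name="key", value_name="value"):
    # The reverse transformation: a list of mappings back into one dict.
    return {i[key_name]: i[value_name] for i in items}

my_dictionary = {"key1": "value1", "key2": "value2"}
as_list = dict2items(my_dictionary)
print(as_list)  # [{'key': 'key1', 'value': 'value1'}, {'key': 'key2', 'value': 'value2'}]
print(items2dict(as_list) == my_dictionary)  # True
```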
-``` +```bash - name: Display a dictionary debug: var: my_dictionary @@ -277,7 +276,7 @@ Note that it is possible to specify the name of the key and of the value to use var: my_list | items2dict(key_name='element', value_name='value') ``` -``` +```bash TASK [Display a dictionary] ************************************************************************* ok: [localhost] => { "my_dictionary": { @@ -327,13 +326,13 @@ ok: [localhost] => { It is possible to merge or filter data from one or more lists: -``` +```bash - name: Merger of two lists debug: var: my_simple_list | union(my_simple_list_2) ``` -``` +```bash ok: [localhost] => { "my_simple_list | union(my_simple_list_2)": [ "value_list_1", @@ -347,13 +346,13 @@ ok: [localhost] => { To keep only the intersection of the 2 lists (the values present in the 2 lists): -``` +```bash - name: Merger of two lists debug: var: my_simple_list | intersect(my_simple_list_2) ``` -``` +```bash TASK [Merger of two lists] ******************************************************************************* ok: [localhost] => { "my_simple_list | intersect(my_simple_list_2)": [ @@ -364,13 +363,13 @@ ok: [localhost] => { Or on the contrary keep only the difference (the values that do not exist in the second list): -``` +```bash - name: Merger of two lists debug: var: my_simple_list | difference(my_simple_list_2) ``` -``` +```bash TASK [Merger of two lists] ******************************************************************************* ok: [localhost] => { "my_simple_list | difference(my_simple_list_2)": [ @@ -382,7 +381,7 @@ ok: [localhost] => { If your list contains non-unique values, it is also possible to filter them with the `unique` filter. -``` +```bash - name: Unique value in a list debug: var: my_simple_list | unique @@ -392,7 +391,7 @@ If your list contains non-unique values, it is also possible to filter them with You may have to import json data (from an API for example) or export data in yaml or json. 
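Under the hood, `to_nice_json` leans on Python's standard `json` module (and `to_nice_yaml` on PyYAML); a rough standard-library sketch of the JSON side, using the same kind of list as the test playbook:

```python
import json

my_list = [
    {"element": "element1", "value": "value1"},
    {"element": "element2", "value": "value2"},
]

# Roughly what "my_list | to_nice_json(indent=4)" renders.
pretty = json.dumps(my_list, indent=4)
print(pretty)
```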
-``` +```bash - name: Display a variable in yaml debug: var: my_list | to_nice_yaml(indent=4) @@ -402,7 +401,7 @@ You may have to import json data (from an API for example) or export data in yam var: my_list | to_nice_json(indent=4) ``` -``` +```bash TASK [Display a variable in yaml] ******************************************************************** ok: [localhost] => { "my_list | to_nice_yaml(indent=4)": "- element: element1\n value: value1\n- element: element2\n value: value2\n" @@ -420,13 +419,13 @@ You will quickly be confronted with errors in the execution of your playbooks if The value of a variable can be substituted for another one if it does not exist with the `default` filter: -``` +```bash - name: Default value debug: var: variablethatdoesnotexists | default(whatever) ``` -``` +```bash TASK [Default value] ******************************************************************************** ok: [localhost] => { "variablethatdoesnotexists | default(whatever)": "It's false!" @@ -435,13 +434,13 @@ ok: [localhost] => { Note the presence of the apostrophe `'` which should be protected, for example, if you were using the `shell` module: -``` +```bash - name: Default value debug: var: variablethatdoesnotexists | default(whatever| quote) ``` -``` +```bash TASK [Default value] ******************************************************************************** ok: [localhost] => { "variablethatdoesnotexists | default(whatever|quote)": "'It'\"'\"'s false!'" @@ -450,7 +449,7 @@ ok: [localhost] => { Finally, an optional variable in a module can be ignored if it does not exist with the keyword `omit` in the `default` filter, which will save you an error at runtime. 
-``` +```bash - name: Add a new user ansible.builtin.user: name: "{{ user_name }}" @@ -463,13 +462,13 @@ Sometimes you need to use a condition to assign a value to a variable, in which This can be avoided by using the `ternary` filter: -``` +```bash - name: Default value debug: var: (user_name == 'antoine') | ternary('admin', 'normal_user') ``` -``` +```bash TASK [Default value] ******************************************************************************** ok: [localhost] => { "(user_name == 'antoine') | ternary('admin', 'normal_user')": "admin" @@ -478,8 +477,8 @@ ok: [localhost] => { ## Some other filters - * `{{ 10000 | random }}`: as its name indicates, gives a random value. - * `{{ my_simple_list | first }}`: extracts the first element of the list. - * `{{ my_simple_list | length }}`: gives the length (of a list or a string). - * `{{ ip_list | ansible.netcommon.ipv4 }}`: only displays v4 IPs. Without dwelling on this, if you need, there are many filters dedicated to the network. - * `{{ user_password | password_hash('sha512') }}`: generates a hashed password in sha512. +* `{{ 10000 | random }}`: as its name indicates, gives a random value. +* `{{ my_simple_list | first }}`: extracts the first element of the list. +* `{{ my_simple_list | length }}`: gives the length (of a list or a string). +* `{{ ip_list | ansible.netcommon.ipv4 }}`: only displays v4 IPs. Without dwelling on this, if you need, there are many filters dedicated to the network. +* `{{ user_password | password_hash('sha512') }}`: generates a hashed password in sha512. diff --git a/docs/books/learning_ansible/08-management-server-optimizations.md b/docs/books/learning_ansible/08-management-server-optimizations.md index 368417b79f..168455728f 100644 --- a/docs/books/learning_ansible/08-management-server-optimizations.md +++ b/docs/books/learning_ansible/08-management-server-optimizations.md @@ -33,7 +33,7 @@ Gathering facts is a process that can take some time. 
It can be interesting to d These facts can be easily stored in a `redis` database: -``` +```bash sudo yum install redis sudo systemctl start redis sudo systemctl enable redis @@ -42,7 +42,7 @@ sudo pip3 install redis Don't forget to modify the ansible configuration: -``` +```bash fact_caching = redis fact_caching_timeout = 86400 fact_caching_connection = localhost:6379:0 @@ -50,7 +50,7 @@ fact_caching_connection = localhost:6379:0 To check the correct operation, it is enough to request the `redis` server: -``` +```bash redis-cli 127.0.0.1:6379> keys * 127.0.0.1:6379> get ansible_facts_SERVERNAME @@ -68,26 +68,26 @@ Ansible will be able to decrypt this file at runtime by retrieving the encryptio Edit the `/etc/ansible/ansible.cfg` file: -``` +```bash #vault_password_file = /path/to/vault_password_file vault_password_file = /etc/ansible/vault_pass ``` Store the password in this file `/etc/ansible/vault_pass` and assign necessary restrictive rights: -``` +```bash mysecretpassword ``` You can then encrypt your files with the command: -``` +```bash ansible-vault encrypt myfile.yml ``` A file encrypted by `ansible-vault` can be easily recognized by its header: -``` +```text $ANSIBLE_VAULT;1.1;AES256 35376532343663353330613133663834626136316234323964333735363333396136613266383966 6664322261633261356566383438393738386165333966660a343032663233343762633936313630 @@ -98,7 +98,7 @@ $ANSIBLE_VAULT;1.1;AES256 Once a file is encrypted, it can still be edited with the command: -``` +```bash ansible-vault edit myfile.yml ``` @@ -106,7 +106,7 @@ You can also deport your password storage to any password manager. 
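The `vault_password_file` setting accepts an executable as well as a plain file: when the target is executable, Ansible runs it and reads the vault password from its standard output, which is how external password stores get hooked in. A minimal hypothetical sketch (the `/tmp/vault_pass.sh` path and `MY_VAULT_PASS` variable are invented for illustration):

```bash
# Hypothetical vault-password script: if vault_password_file points at an
# executable, Ansible runs it and uses its stdout as the vault password.
cat > /tmp/vault_pass.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# MY_VAULT_PASS is an invented variable name; a real script would query
# a password manager here instead of reading the environment.
echo "${MY_VAULT_PASS:?vault password not set}"
EOF
chmod 700 /tmp/vault_pass.sh

MY_VAULT_PASS=mysecretpassword /tmp/vault_pass.sh
```

A real script would replace the `echo` with a call to the password manager of your choice, keeping the same restrictive permissions as the plain-text file.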
For example, to retrieve a password that would be stored in the rundeck vault: -``` +```python #!/usr/bin/env python # -*- coding: utf-8 -*- import urllib.request @@ -141,13 +141,13 @@ It will be necessary to install on the management server several packages: * Via the package manager: -``` +```bash sudo dnf install python38-devel krb5-devel krb5-libs krb5-workstation ``` and configure the `/etc/krb5.conf` file to specify the correct `realms`: -``` +```bash [realms] ROCKYLINUX.ORG = { kdc = dc1.rockylinux.org @@ -159,7 +159,7 @@ ROCKYLINUX.ORG = { * Via the python package manager: -``` +```bash pip3 install pywinrm pip3 install pywinrm[credssp] pip3 install kerberos requests-kerberos @@ -169,7 +169,7 @@ pip3 install kerberos requests-kerberos Network modules usually require the `netaddr` python module: -``` +```bash sudo pip3 install netaddr ``` @@ -177,24 +177,24 @@ sudo pip3 install netaddr A tool, `ansible-cmdb` has been developed to generate a CMDB from ansible. -``` +```bash pip3 install ansible-cmdb ``` The facts must be exported by ansible with the following command: -``` +```bash ansible --become --become-user=root -o -m setup --tree /var/www/ansible/cmdb/out/ ``` You can then generate a global `json` file: -``` +```bash ansible-cmdb -t json /var/www/ansible/cmdb/out/linux > /var/www/ansible/cmdb/cmdb-linux.json ``` If you prefer a web interface: -``` +```bash ansible-cmdb -t html_fancy_split /var/www/ansible/cmdb/out/ ``` diff --git a/docs/books/learning_bash/01-first-script.md b/docs/books/learning_bash/01-first-script.md index 5cbe13a577..3be0523656 100644 --- a/docs/books/learning_bash/01-first-script.md +++ b/docs/books/learning_bash/01-first-script.md @@ -23,7 +23,7 @@ In this chapter you will learn how to write your first script in bash. 
:checkered_flag: **linux**, **script**, **bash** -**Knowledge**: :star: +**Knowledge**: :star: **Complexity**: :star: **Reading time**: 10 minutes @@ -46,7 +46,7 @@ The name of the script should respect some rules: The author uses the "$" throughout these lessons to indicate the user's command-prompt. -``` +```bash #!/usr/bin/env bash # # Author : Rocky Documentation Team @@ -60,14 +60,14 @@ echo "Hello world!" To be able to run this script, as an argument to bash: -``` +```bash $ bash hello-world.sh Hello world ! ``` Or, more simply, after having given it the right to execute: -``` +```bash $ chmod u+x ./hello-world.sh $ ./hello-world.sh Hello world ! @@ -83,19 +83,19 @@ Hello world ! The first line to be written in any script is to indicate the name of the shell binary to be used to execute it. If you want to use the `ksh` shell or the interpreted language `python`, you would replace the line: -``` +```bash #!/usr/bin/env bash ``` with : -``` +```bash #!/usr/bin/env ksh ``` or with : -``` +```bash #!/usr/bin/env python ``` @@ -117,7 +117,7 @@ Comments can be placed on a separate line or at the end of a line containing a c Example: -``` +```bash # This program displays the date date # This line is the line that displays the date! ``` diff --git a/docs/books/learning_bash/02-using-variables.md b/docs/books/learning_bash/02-using-variables.md index 8292f65825..7387cc19ba 100644 --- a/docs/books/learning_bash/02-using-variables.md +++ b/docs/books/learning_bash/02-using-variables.md @@ -41,7 +41,7 @@ The content of a variable can be changed during the script, as the variable cont The notion of a variable type in a shell script is possible but is very rarely used. The content of a variable is always a character or a string. -``` +```bash #!/usr/bin/env bash # @@ -76,7 +76,7 @@ By convention, variables created by a user have a name in lower case. 
This name The character `=` assigns content to a variable: -``` +```bash variable=value rep_name="/home" ``` @@ -85,14 +85,14 @@ There is no space before or after the `=` sign. Once the variable is created, it can be used by prefixing it with a dollar $. -``` +```bash file=file_name touch $file ``` It is strongly recommended to protect variables with quotes, as in this example below: -``` +```bash file=file name touch $file touch "$file" @@ -102,7 +102,7 @@ As the content of the variable contains a space, the first `touch` will create 2 To isolate the name of the variable from the rest of the text, you must use quotes or braces: -``` +```bash file=file_name touch "$file"1 touch ${file}1 @@ -112,7 +112,7 @@ touch ${file}1 The use of apostrophes inhibits the interpretation of special characters. -``` +```bash message="Hello" echo "This is the content of the variable message: $message" Here is the content of the variable message: Hello @@ -126,7 +126,7 @@ The `unset` command allows for the deletion of a variable. Example: -``` +```bash name="NAME" firstname="Firstname" echo "$name $firstname" @@ -140,7 +140,7 @@ The `readonly` or `typeset -r` command locks a variable. Example: -``` +```bash name="NAME" readonly name name="OTHER NAME" @@ -195,21 +195,21 @@ It is possible to store the result of a command in a variable. 
The syntax for sub-executing a command is as follows: -``` +```bash variable=`command` variable=$(command) # Preferred syntax ``` Example: -``` -$ day=`date +%d` -$ homedir=$(pwd) +```bash +day=`date +%d` +homedir=$(pwd) ``` With everything we've just seen, our backup script might look like this: -``` +```bash #!/usr/bin/env bash # @@ -257,13 +257,13 @@ logger "Backup of system files by ${USER} on ${HOSTNAME} in the folder ${DESTINA Running our backup script: -``` -$ sudo ./backup.sh +```bash +sudo ./backup.sh ``` will give us: -``` +```bash **************************************************************** Backup Script - Backup on desktop **************************************************************** diff --git a/docs/books/learning_bash/03-data-entry-and-manipulations.md b/docs/books/learning_bash/03-data-entry-and-manipulations.md index 7ada43699f..32b1cfb025 100644 --- a/docs/books/learning_bash/03-data-entry-and-manipulations.md +++ b/docs/books/learning_bash/03-data-entry-and-manipulations.md @@ -17,10 +17,10 @@ In this chapter you will learn how to make your scripts interact with users and **Objectives**: In this chapter you will learn how to: -:heavy_check_mark: read input from a user; -:heavy_check_mark: manipulate data entries; -:heavy_check_mark: use arguments inside a script; -:heavy_check_mark: manage positional variables; +:heavy_check_mark: read input from a user; +:heavy_check_mark: manipulate data entries; +:heavy_check_mark: use arguments inside a script; +:heavy_check_mark: manage positional variables; :checkered_flag: **linux**, **script**, **bash**, **variable** @@ -39,13 +39,13 @@ The `read` command allows you to enter a character string and store it in a vari Syntax of the read command: -``` +```bash read [-n X] [-p] [-s] [variable] ``` The first example below, prompts you for two variable inputs: "name" and "firstname", but since there is no prompt, you would have to know ahead of time that this was the case. 
In the case of this particular entry, each variable input would be separated by a space. The second example prompts for the variable "name" with the prompt text included: -``` +```bash read name firstname read -p "Please type your name: " name ``` @@ -56,22 +56,22 @@ read -p "Please type your name: " name | `-n` | Limits the number of characters to be entered. | | `-s` | Hides the input. | -When using the `-n` option, the shell automatically validates the input after the specified number of characters. The user does not have to press the ENTER key. +When using the `-n` option, the shell automatically validates the input after the specified number of characters. The user does not have to press the ++enter++ key. -``` +```bash read -n5 name ``` The `read` command allows you to interrupt the execution of the script while the user enters information. The user's input is broken down into words assigned to one or more predefined variables. The words are strings of characters separated by the field separator. -The end of the input is determined by pressing the ENTER key. +The end of the input is determined by pressing the ++enter++ key. Once the input is validated, each word will be stored in the predefined variable. The division of the words is defined by the field separator character. This separator is stored in the system variable `IFS` (**Internal Field Separator**). -``` +```bash set | grep IFS IFS=$' \t\n' ``` @@ -80,9 +80,9 @@ By default, the IFS contains the space, tab and line feed. When used without specifying a variable, this command simply pauses the script. The script continues its execution when the input is validated. -This is used to pause a script when debugging or to prompt the user to press ENTER to continue. +This is used to pause a script when debugging or to prompt the user to press ++enter++ to continue. -``` +```bash echo -n "Press [ENTER] to continue..." 
read ``` @@ -93,13 +93,13 @@ The cut command allows you to isolate a column in a file or in a stream. Syntax of the cut command: -``` +```bash cut [-cx] [-dy] [-fz] file ``` Example of use of the cut command: -``` +```bash cut -d: -f1 /etc/passwd ``` @@ -116,7 +116,7 @@ The main benefit of this command will be its association with a stream, for exam Example: -``` +```bash grep "^root:" /etc/passwd | cut -d: -f3 0 ``` @@ -131,7 +131,7 @@ The `tr` command allows you to convert a string. Syntax of the `tr` command: -``` +```bash tr [-csd] string1 string2 ``` @@ -143,22 +143,28 @@ tr [-csd] string1 string2 An example of using the `tr` command follows. If you use `grep` to return root's `passwd` file entry, you would get this: -``` +```bash grep root /etc/passwd ``` + returns: -``` + +```bash root:x:0:0:root:/root:/bin/bash ``` + Now let's use `tr` command and the reduce the "o's" in the line: -``` +```bash grep root /etc/passwd | tr -s "o" ``` + which returns this: -``` + +```bash rot:x:0:0:rot:/rot:/bin/bash ``` + ## Extract the name and path of a file The `basename` command allows you to extract the name of the file from a path. @@ -167,14 +173,17 @@ The `dirname` command allows you to extract the parent path of a file. Examples: -``` +```bash echo $FILE=/usr/bin/passwd basename $FILE ``` + Which would result in "passwd" -``` + +```bash dirname $FILE ``` + Which would result in: "/usr/bin" ## Arguments of a script @@ -193,7 +202,7 @@ Its major disadvantage is that the user will have to be warned about the syntax The arguments are filled in when the script command is entered. They are separated by a space. 
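The extraction commands covered above (`cut`, `tr`, `basename`, and `dirname`) are often combined in a single pipeline. A minimal sketch, using a hypothetical `passwd`-style line rather than a real system file:

```bash
# Hypothetical passwd-style record used only for illustration
line="root:x:0:0:root:/root:/bin/bash"

# Isolate the 7th field (the login shell) with cut
shell=$(echo "$line" | cut -d: -f7)
echo "$shell"              # /bin/bash

# Squeeze the repeated "o" characters with tr
echo "$line" | tr -s "o"   # rot:x:0:0:rot:/rot:/bin/bash

# Split the shell path into its file name and parent directory
basename "$shell"          # bash
dirname "$shell"           # /bin
```

The same approach works on a real `/etc/passwd` entry fed through `grep`, as in the examples above.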
-``` +```bash ./script argument1 argument2 ``` @@ -214,7 +223,7 @@ These variables can be used in the script like any other variable, except that t Example: -``` +```bash #!/usr/bin/env bash # # Author : Damien dit LeDub @@ -238,7 +247,7 @@ echo "All without separation (\$@) = $@" This will give: -``` +```bash $ ./arguments.sh one two "tree four" The number of arguments ($#) = 3 The name of the script ($0) = ./arguments.sh @@ -264,7 +273,7 @@ The shift command allows you to shift positional variables. Let's modify our previous example to illustrate the impact of the shift command on positional variables: -``` +```bash #!/usr/bin/env bash # # Author : Damien dit LeDub @@ -299,7 +308,7 @@ echo "All without separation (\$@) = $@" This will give: -``` +```bash ./arguments.sh one two "tree four" The number of arguments ($#) = 3 The 1st argument ($1) = one @@ -330,13 +339,13 @@ The `set` command splits a string into positional variables. Syntax of the set command: -``` +```bash set [value] [$variable] ``` Example: -``` +```bash $ set one two three $ echo $1 $2 $3 $# one two three 3 diff --git a/docs/books/learning_bash/04-check-your-knowledge.md b/docs/books/learning_bash/04-check-your-knowledge.md index f9bf96d729..be0177e45c 100644 --- a/docs/books/learning_bash/04-check-your-knowledge.md +++ b/docs/books/learning_bash/04-check-your-knowledge.md @@ -13,7 +13,7 @@ tags: :heavy_check_mark: Among these 4 shells, which one does not exist: -- [ ] Bash +- [ ] Bash - [ ] Ksh - [ ] Tsh - [ ] Csh diff --git a/docs/books/learning_bash/05-tests.md b/docs/books/learning_bash/05-tests.md index cd81a197f5..e3f470d225 100644 --- a/docs/books/learning_bash/05-tests.md +++ b/docs/books/learning_bash/05-tests.md @@ -38,27 +38,26 @@ You should refer to the manual of the `man command` to know the different values The return code is not visible directly, but is stored in a special variable: `$?`. -``` +```bash mkdir directory echo $? 
0 ``` -``` +```bash mkdir /directory mkdir: unable to create directory echo $? 1 ``` -``` +```bash command_that_does_not_exist command_that_does_not_exist: command not found echo $? 127 ``` - !!! note The display of the contents of the `$?` variable with the `echo` command is done immediately after the command you want to evaluate because this variable is updated after each execution of a command, a command line or a script. @@ -80,7 +79,7 @@ echo $? It is also possible to create return codes in a script. To do so, you just need to add a numeric argument to the `exit` command. -``` +```bash bash # to avoid being disconnected after the "exit 2 exit 123 echo $? @@ -103,13 +102,13 @@ The result of the test: Syntax of the `test` command for a file: -``` +```bash test [-d|-e|-f|-L] file ``` or: -``` +```bash [ -d|-e|-f|-L file ] ``` @@ -139,7 +138,7 @@ Options of the test command on files: Example: -``` +```bash test -e /etc/passwd echo $? 0 @@ -150,7 +149,7 @@ echo $? An internal command to some shells (including bash) that is more modern, and provides more features than the external command `test`, has been created. -``` +```bash [[ -s /etc/passwd ]] echo $? 1 @@ -164,7 +163,7 @@ echo $? It is also possible to compare two files: -``` +```bash [[ file1 -nt|-ot|-ef file2 ]] ``` @@ -178,7 +177,7 @@ It is also possible to compare two files: It is possible to test variables: -``` +```bash [[ -z|-n $variable ]] ``` @@ -191,13 +190,13 @@ It is possible to test variables: It is also possible to compare two strings: -``` +```bash [[ string1 =|!=|<|> string2 ]] ``` Example: -``` +```bash [[ "$var" = "Rocky rocks!" ]] echo $? 0 @@ -214,20 +213,20 @@ echo $? Syntax for testing integers: -``` +```bash [[ "num1" -eq|-ne|-gt|-lt "num2" ]] ``` Example: -``` +```bash var=1 [[ "$var" -eq "1" ]] echo $? 0 ``` -``` +```bash var=2 [[ "$var" -eq "1" ]] echo $? @@ -264,11 +263,11 @@ echo $? The combination of tests allows you to perform several tests in one command. 
It is possible to test the same argument (file, string or numeric) several times or different arguments. -``` +```bash [ option1 argument1 [-a|-o] option2 argument 2 ] ``` -``` +```bash ls -lad /etc drwxr-xr-x 142 root root 12288 sept. 20 09:25 /etc [ -d /etc -a -x /etc ] @@ -281,22 +280,21 @@ echo $? | `-a` | AND: The test will be true if all patterns are true. | | `-o` | OR: The test will be true if at least one pattern is true. | - With the internal command, it is better to use this syntax: -``` +```bash [[ -d "/etc" && -x "/etc" ]] ``` Tests can be grouped with parentheses `(` `)` to give them priority. -``` +```bash (TEST1 -a TEST2) -a TEST3 ``` The `!` character is used to perform the reverse test of the one requested by the option: -``` +```bash test -e /file # true if file exists ! test -e /file # true if file does not exist ``` @@ -305,13 +303,13 @@ test -e /file # true if file exists The `expr` command performs an operation with numeric integers. -``` +```bash expr num1 [+] [-] [\*] [/] [%] num2 ``` Example: -``` +```bash expr 2 + 2 4 ``` @@ -329,14 +327,13 @@ expr 2 + 2 | `/` | Division quotient | | `%` | Modulo of the division | - ## The `typeset` command The `typeset -i` command declares a variable as an integer. Example: -``` +```bash typeset -i var1 var1=1+1 var2=1+1 @@ -352,7 +349,7 @@ The `let` command tests if a character is numeric. Example: -``` +```bash var1="10" var2="AA" let $var1 @@ -375,7 +372,7 @@ echo $? The `let` command also allows you to perform mathematical operations: -``` +```bash let var=5+5 echo $var 10 @@ -383,7 +380,7 @@ echo $var `let` can be substituted by `$(( ))`. 
-``` +```bash echo $((5+2)) 7 echo $((5*2)) diff --git a/docs/books/learning_bash/06-conditional-structures.md b/docs/books/learning_bash/06-conditional-structures.md index 35c2a0da2a..56243aac4f 100644 --- a/docs/books/learning_bash/06-conditional-structures.md +++ b/docs/books/learning_bash/06-conditional-structures.md @@ -36,7 +36,7 @@ But we can use it in a condition. Syntax of the conditional alternative `if`: -``` +```bash if command then command if $?=0 @@ -52,7 +52,7 @@ Using a classical command (`mkdir`, `tar`, ...) allows you to define the actions Examples: -``` +```bash if [[ -e /etc/passwd ]] then echo "The file exists" @@ -68,7 +68,7 @@ fi If the `else` block starts with a new `if` structure, you can merge the `else` and `if` with `elif` as shown below: -``` +```bash [...] else if [[ -e /etc/ ]] @@ -99,7 +99,7 @@ The command to execute if `$?` is `true` is placed after `&&` while the command Example: -``` +```bash [[ -e /etc/passwd ]] && echo "The file exists" || echo "The file does not exist" mkdir dir && echo "The directory is created". 
``` @@ -109,21 +109,26 @@ It is also possible to evaluate and replace a variable with a lighter structure This syntax implements the braces: * Displays a replacement value if the variable is empty: - ``` + + ```bash ${variable:-value} ``` + * Displays a replacement value if the variable is not empty: - ``` + + ```bash ${variable:+value} ``` + * Assigns a new value to the variable if it is empty: - ``` + + ```bash ${variable:=value} ``` Examples: -``` +```bash name="" echo ${name:-linux} linux @@ -160,7 +165,7 @@ Placed at the end of the structure, the choice `*` indicates the actions to be e Syntax of the conditional alternative case: -``` +```bash case $variable in value1) commands if $variable = value1 @@ -177,7 +182,7 @@ esac When the value is subject to variation, it is advisable to use wildcards `[]` to specify the possibilities: -``` +```bash [Yy][Ee][Ss]) echo "yes" ;; @@ -185,7 +190,7 @@ When the value is subject to variation, it is advisable to use wildcards `[]` to The character `|` also allows you to specify a value or another: -``` +```bash "yes" | "YES") echo "yes" ;; diff --git a/docs/books/learning_bash/07-loops.md b/docs/books/learning_bash/07-loops.md index 76b8a832be..dec2a79df3 100644 --- a/docs/books/learning_bash/07-loops.md +++ b/docs/books/learning_bash/07-loops.md @@ -45,7 +45,7 @@ When the evaluated command is false (`$? != 0`), the shell resumes the execution Syntax of the conditional loop structure `while`: -``` +```bash while command do command if $? = 0 @@ -54,7 +54,7 @@ done Example using the `while` conditional structure: -``` +```bash while [[ -e /etc/passwd ]] do echo "The file exists" @@ -77,13 +77,13 @@ The `exit` command ends the execution of the script. Syntax of the `exit` command : -``` +```bash exit [n] ``` Example using the `exit` command : -``` +```bash bash # to avoid being disconnected after the "exit 1 exit 1 echo $? 
@@ -99,7 +99,7 @@ The `break` command allows you to interrupt the loop by going to the first comma

The `continue` command allows you to restart the loop by going back to the first command after `done`.

-```
+```bash
while [[ -d / ]]
do
  echo "Do you want to continue? (yes/no)"
@@ -113,7 +113,7 @@ done

The `true` command always returns `true` while the `false` command always returns `false`.

-```
+```bash
true
echo $?
0
@@ -126,7 +126,7 @@ Used as a condition of a loop, they allow for either an execution of an infinite

Example:

-```
+```bash
while true
do
  echo "Do you want to continue? (yes/no)"
@@ -146,7 +146,7 @@ When the evaluated command is true (`$? = 0`), the shell resumes the execution o

Syntax of the conditional loop structure `until`:

-```
+```bash
until command
do
  command if $? != 0
@@ -155,7 +155,7 @@ done

Example of the use of the conditional structure `until`:

-```
+```bash
until [[ -e test_until ]]
do
  echo "The file does not exist"
@@ -182,7 +182,7 @@ A `break` command is needed to exit the loop.

Syntax of the conditional loop structure `select`:

-```
+```bash
PS3="Your choice:"
select variable in var1 var2 var3
do
@@ -192,7 +192,7 @@ done

Example of the use of the conditional structure `select`:

-```
+```bash
PS3="Your choice: "
select choice in coffee tea chocolate
do
@@ -202,7 +202,7 @@ done

If this script is run, it shows something like this:

-```
+```text
1) Coffee
2) Tea
3) Chocolate
@@ -217,7 +217,7 @@ The `for` / `do` / `done` structure assigns the first element of the list to the

Syntax of the loop structure on list of values `for`:

-```
+```bash
for variable in list
do
  commands
@@ -226,7 +226,7 @@ done

Example of using the conditional structure `for`:

-```
+```bash
for file in /home /etc/passwd /root/fic.txt
do
  file $file
@@ -240,7 +240,7 @@ Any command producing a list of values can be placed after the `in` using a sub-

This can be the files in a directory.
In this case, the variable will take as a value each of the words of the file names present: -``` +```bash for file in $(ls -d /tmp/*) do echo $file @@ -249,7 +249,7 @@ done It can be a file. In this case, the variable will take as a value each word contained in the file browsed, from the beginning to the end: -``` +```bash cat my_file.txt first line second line @@ -265,7 +265,7 @@ line To read a file line by line, you must modify the value of the `IFS` environment variable. -``` +```bash IFS=$'\t\n' for LINE in $(cat my_file.txt); do echo $LINE; done first line diff --git a/docs/books/learning_bash/08-check-your-knowledge.md b/docs/books/learning_bash/08-check-your-knowledge.md index 15c72ea96d..dd3f0d57c4 100644 --- a/docs/books/learning_bash/08-check-your-knowledge.md +++ b/docs/books/learning_bash/08-check-your-knowledge.md @@ -13,17 +13,17 @@ tags: :heavy_check_mark: Every order must return a return code at the end of its execution: -- [ ] True +- [ ] True - [ ] False :heavy_check_mark: A return code of 0 indicates an execution error: -- [ ] True +- [ ] True - [ ] False :heavy_check_mark: The return code is stored in the variable `$@`: -- [ ] True +- [ ] True - [ ] False :heavy_check_mark: The test command allows you to: @@ -41,7 +41,7 @@ tags: :heavy_check_mark: Does the syntax of the conditional structure below seem correct to you? Explain why. -``` +```bash if command command if $?=0 else @@ -60,7 +60,7 @@ fi :heavy_check_mark: Does the syntax of the conditional alternative structure below seem correct to you? Explain why. 
-```
+```bash
case $variable in
  value1)
    commands if $variable = value1
diff --git a/docs/books/learning_rsync/01_rsync_overview.md b/docs/books/learning_rsync/01_rsync_overview.md
index 4d76dc0712..be0b6af133 100644
--- a/docs/books/learning_rsync/01_rsync_overview.md
+++ b/docs/books/learning_rsync/01_rsync_overview.md
@@ -5,7 +5,7 @@ contributors: Steven Spencer, Ganna Zhyrnova
update : 2022-Mar-08
---

-# Backup Brief 
+# Backup Brief

What is a backup?

@@ -21,9 +21,9 @@ What are the backup methods?

* Hot backup: Refers to the backup when the system is in normal operation. As the data in the system is updated at any time, the backed-up data has a certain lag relative to the real data of the system.
* Remote backup: refers to backing up data in another geographic location to avoid data loss and service interruption caused by fire, natural disasters, theft, etc.

-## rsync in brief 
+## rsync in brief

-On a server, I backed up the first partition to the second partition, which is commonly known as "Local backup." The specific backup tools are `tar` , `dd` , `dump` , `cp `, etc. can be achieved. Although the data is backed up on this server, if the hardware fails to boot up properly, the data will not be retrieved. In order to solve this problem with the local backup, we introduced another kind of backup --- "remote backup".
+On a server, I backed up the first partition to the second partition, which is commonly known as "Local backup." This can be achieved with backup tools such as `tar`, `dd`, `dump`, `cp`, etc. Although the data is backed up on this server, if the hardware fails to boot up properly, the data will not be retrieved. In order to solve this problem with the local backup, we introduced another kind of backup --- "remote backup".

Some people will say, can't I just use the `tar` or `cp` command on the first server and send it to the second server via `scp` or `sftp`?
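That naive approach might look like the sketch below (the paths and the backup host are hypothetical). Notice that every run rebuilds and would re-send the complete archive even if almost nothing changed, which is exactly the problem that rsync's differential transfer solves:

```bash
# Hypothetical data directory used only for illustration
src="/tmp/demo_data"
mkdir -p "$src"
echo "some data" > "$src/file.txt"

# Every run produces a full archive of everything, changed or not
tar -czf /tmp/demo_backup.tar.gz -C "$src" .

# The whole archive would then be copied to the second server, e.g.:
# scp /tmp/demo_backup.tar.gz backup@192.168.1.10:/backup/   # hypothetical host
```
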
@@ -39,7 +39,7 @@ Therefore, there needs to be a data backup in the production environment which n

In terms of platform support, most Unix-like systems are supported, whether it is GNU/Linux or BSD. In addition, there are related `rsync` under the Windows platform, such as cwRsync.

-The original `rsync` was maintained by the Australian programmer Andrew Tridgell (shown in Figure 1 below), and now it has been maintained by Wayne Davison (shown in Figure 2 below) ) For maintenance, you can go to [ github project address ](https://github.com/WayneD/rsync) to get the information you want.
+The original `rsync` was maintained by the Australian programmer Andrew Tridgell (shown in Figure 1 below) and is now maintained by Wayne Davison (shown in Figure 2 below). You can go to the [github project address](https://github.com/WayneD/rsync) to get the information you want.

![ Andrew Tridgell ](images/Andrew_Tridgell.jpg) ![ Wayne Davison ](images/Wayne_Davison.jpg)

@@ -48,7 +48,7 @@ The original `rsync` was maintained by the Australian programmer
|push/upload|Fedora34; Fedora34-->|pull/download|RockyLinux8;
```

-## Demonstration based on SSH protocol 
+## Demonstration based on SSH protocol

!!! tip "tip"
    Here, both Rocky Linux 8 and Fedora 34 use the root user to log in. Fedora 34 is the client and Rocky Linux 8 is the server.

-### pull/download 
+### pull/download

Since it is based on the SSH protocol, we first create a user in the server:

@@ -90,6 +90,7 @@ total size is 0 speedup is 0.00
[root@fedora ~]# ls
aabbcc
```
+
The transfer was successful.

!!!
tip "tip"

diff --git a/docs/books/learning_rsync/03_rsync_demo02.md b/docs/books/learning_rsync/03_rsync_demo02.md
index d2bc334c81..24bdcb4ea0 100644
--- a/docs/books/learning_rsync/03_rsync_demo02.md
+++ b/docs/books/learning_rsync/03_rsync_demo02.md
@@ -6,6 +6,7 @@ update: 2021-11-04
---

# Demonstration based on rsync protocol
+
In vsftpd, there are virtual users (impersonated users customized by the administrator) because it is not safe to use anonymous users and local users. We know that a server based on the SSH protocol must ensure that there is a system of users. When there are many synchronization requirements, it may be necessary to create many users. This obviously does not meet the GNU/Linux operation and maintenance standards (the more users, the more insecure), in rsync, for security reasons, there is an rsync protocol authentication login method. **How to do it?**

@@ -17,7 +18,7 @@ Just write the corresponding parameters and values in the configuration file. In
[root@Rocky ~]# vim /etc/rsyncd.conf
```

-Some parameters and values of this file are as follows, [ here ](04_rsync_configure.md) has more parameter descriptions:
+Some parameters and values of this file are as follows; [here](04_rsync_configure.md) provides more parameter descriptions:

|Item|Description|
|---|---|
@@ -91,7 +92,7 @@ aabbcc anaconda-ks.cfg fedora rsynctest.txt
success!
In addition to the above writing based on the rsync protocol, you can also write like this: `rsync://li@10.1.2.84/share` -## push/upload +## push/upload ```bash [root@fedora ~]# touch /root/fedora.txt diff --git a/docs/books/learning_rsync/04_rsync_configure.md b/docs/books/learning_rsync/04_rsync_configure.md index a57251e382..2d41efdb70 100644 --- a/docs/books/learning_rsync/04_rsync_configure.md +++ b/docs/books/learning_rsync/04_rsync_configure.md @@ -4,9 +4,9 @@ author : tianci li update : 2021-11-04 --- -# /etc/rsyncd.conf +# /etc/rsyncd.conf -In the previous article [ rsync demo 02 ](03_rsync_demo02.md) we introduced some basic parameters. This article is to supplement other parameters. +In the previous article [rsync demo 02](03_rsync_demo02.md) we introduced some basic parameters. This article is to supplement other parameters. |Parameters|Description| |---|---| @@ -26,6 +26,6 @@ In the previous article [ rsync demo 02 ](03_rsync_demo02.md) we introduced some | auth users = li |Enable virtual users, multiple users are separated by commas in English state| | syslog facility = daemon | Define the level of system log. These values ​​can be filled in: auth, authpriv, cron, daemon, ftp, kern, lpr, mail, news, security, syslog, user, uucp, local0, local1, local2 local3, local4, local5, local6 and local7. The default value is daemon| -## Recommended configuration +## Recommended configuration ![ photo ](images/rsync_config.jpg) diff --git a/docs/books/learning_rsync/06_rsync_inotify.md b/docs/books/learning_rsync/06_rsync_inotify.md index 0357f3ccdb..69e6c5a163 100644 --- a/docs/books/learning_rsync/06_rsync_inotify.md +++ b/docs/books/learning_rsync/06_rsync_inotify.md @@ -62,8 +62,10 @@ fs.inotify.max_user_watches = 1048576 ## Related commands The inotify-tools tool has two commands, namely: -* **inotifywait**: for continuous monitoring, real-time output results. It is generally used with the rsync incremental backup tool. 
Because it is a file system monitoring, it can be used with a script. We will introduce the specific script writing later.
-* **inotifywatch**: for short-term monitoring, output results after the task is completed.
+
+* **inotifywait**: for continuous monitoring with real-time output of results. It is generally used with the rsync incremental backup tool. Because it monitors the file system, it can be used with a script. We will introduce the specific script writing later.
+
+* **inotifywatch**: for short-term monitoring, outputting results after the task is completed.

`inotifywait` mainly has the following options:

diff --git a/docs/books/lxd_server/00-toc.md b/docs/books/lxd_server/00-toc.md
index 563b106a25..f183d6424d 100644
--- a/docs/books/lxd_server/00-toc.md
+++ b/docs/books/lxd_server/00-toc.md
@@ -29,7 +29,7 @@ For those wanting to use LXD as a lab environment on their own notebooks or work

* Comfort at the command line on your machine(s), and fluent in a command line editor. (Using _vi_ throughout these examples, but you can substitute in your favorite editor.)
* You will need to be your unprivileged user for the bulk of these processes. For the early setup steps, you will need to be the root user or be able to `sudo` to become so. Throughout these chapters, we assume your unprivileged user to be "lxdadmin". You will have to create this user account later in the process.
* For ZFS, ensure that UEFI secure boot is NOT enabled. Otherwise, you will end up having to sign the ZFS module to get it to load.
-* Using Rocky Linux-based containers for the most part +* Using Rocky Linux-based containers for the most part ## Synopsis diff --git a/docs/books/lxd_server/01-install.md b/docs/books/lxd_server/01-install.md index e58c0fa20e..4face6b175 100644 --- a/docs/books/lxd_server/01-install.md +++ b/docs/books/lxd_server/01-install.md @@ -11,19 +11,19 @@ tags: # Chapter 1: Install and configuration -Throughout this chapter you will need to be the root user or you will need to be able to _sudo_ to root. +Throughout this chapter you will need to be the root user or you will need to be able to *sudo* to root. ## Install EPEL and OpenZFS repositories LXD requires the EPEL (Extra Packages for Enterprise Linux) repository, which is easy to install using: -``` +```bash dnf install epel-release ``` When installed, verify there are no updates: -``` +```bash dnf upgrade ``` @@ -33,7 +33,7 @@ If there were any kernel updates during the upgrade process, reboot the server. Install the OpenZFS repository with: -``` +```bash dnf install https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").noarch.rpm ``` @@ -41,19 +41,19 @@ dnf install https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").n LXD installation requires a snap package on Rocky Linux. For this reason, you need to install `snapd` (and a few other useful programs) with: -``` +```bash dnf install snapd dkms vim kernel-devel ``` Now enable and start snapd: -``` +```bash systemctl enable snapd ``` Then run: -``` +```bash systemctl start snapd ``` @@ -63,13 +63,13 @@ Reboot the server before continuing here. Installing LXD requires the use of the snap command. At this point, you are just installing it, you are not doing the set up: -``` +```bash snap install lxd ``` -## Install OpenZFS +## Install OpenZFS -``` +```bash dnf install zfs ``` @@ -83,13 +83,13 @@ Luckily, tweaking the settings for LXD is not hard with a few file modifications The first file you need to change is the `limits.conf` file. 
This file is self-documented. Examine the explanations in the comment in the file to understand what this file does. To make your modifications enter:

-```
+```bash
vi /etc/security/limits.conf
```

This entire file consists of comments, and at the bottom, shows the current default settings. In the blank space above the end of file marker (#End of file) you need to add our custom settings. The end of the file will look like this when completed:

-```
+```text
# Modifications made for LXD

* soft nofile 1048576
@@ -100,15 +100,15 @@ root hard nofile 1048576
* hard memlock unlimited
```

-Save your changes and exit. (SHIFT+:+wq! for _vi_)
+Save your changes and exit. (++shift+colon+"w"+"q"+exclam++ for *vi*)

### Modifying sysctl.conf with `90-lxd.override.conf`

-With _systemd_, you can make changes to your system's overall configuration and kernel options *without* modifying the main configuration file. Instead, put your settings in a separate file that will override the particular settings you need.
+With *systemd*, you can make changes to your system's overall configuration and kernel options *without* modifying the main configuration file. Instead, put your settings in a separate file that will override the particular settings you need.

To make these kernel changes, you are going to create a file called `90-lxd-override.conf` in `/etc/sysctl.d`. To do this type:

-```
+```bash
vi /etc/sysctl.d/90-lxd-override.conf
```

@@ -118,7 +118,7 @@ vi /etc/sysctl.d/90-lxd-override.conf

Place the following content in that file. Note that if you are wondering what you are doing here, the file content is self-documenting:

-```
+```bash
## The following changes have been made for LXD ##

# fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance
@@ -176,19 +176,19 @@ Save your changes and exit.

At this point reboot the server.
-### Checking _sysctl.conf_ values +### Checking *sysctl.conf* values After the reboot, log back in as the root user to the server. You need to check that our override file has actually completed the job. This is not hard to do. There's no need to verify every setting unless you want to, but checking a few will verify that the settings have changed. Do this with the `sysctl` command: -``` +```bash sysctl net.core.bpf_jit_limit ``` Which will show you: -``` +```bash net.core.bpf_jit_limit = 3000000000 ``` diff --git a/docs/books/lxd_server/02-zfs_setup.md b/docs/books/lxd_server/02-zfs_setup.md index 20f378e6bd..9fca62c5ce 100644 --- a/docs/books/lxd_server/02-zfs_setup.md +++ b/docs/books/lxd_server/02-zfs_setup.md @@ -19,7 +19,7 @@ If you have already installed ZFS, this section will walk you through ZFS setup. First, enter this command: -``` +```bash /sbin/modprobe zfs ``` @@ -27,13 +27,13 @@ If there are no errors, it will return to the prompt and echo nothing. If you ge Next you need to examine the disks on our system, find out where the operating system is, and what is available to use for the ZFS pool. You will do this with `lsblk`: -``` +```bash lsblk ``` Which will return something like this (your system will be different!): -``` +```bash AME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 32.3M 1 loop /var/lib/snapd/snap/snapd/11588 loop1 7:1 0 55.5M 1 loop /var/lib/snapd/snap/core18/1997 @@ -55,7 +55,7 @@ In this listing, you can see that */dev/sda* is in use by the operating system. That falls outside the scope of this document, but definitely is a consideration for production. It offers better performance and redundancy. 
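As a sketch only of that production-oriented alternative, assuming two spare, unused disks (the device names below are hypothetical), a mirrored pool would be created like this instead:

```bash
# DESTRUCTIVE: run only against disks that contain no data.
# /dev/sdb and /dev/sdc are hypothetical device names; check lsblk first.
zpool create storage mirror /dev/sdb /dev/sdc

# Confirm the mirror layout
zpool status storage
```

The rest of the chapter applies unchanged; only the `zpool create` invocation differs.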
For now, create your pool on the single device you have identified: -``` +```bash zpool create storage /dev/sdb ``` diff --git a/docs/books/lxd_server/03-lxdinit.md b/docs/books/lxd_server/03-lxdinit.md index eeb2dcd041..40305ba8ae 100644 --- a/docs/books/lxd_server/03-lxdinit.md +++ b/docs/books/lxd_server/03-lxdinit.md @@ -18,50 +18,50 @@ Throughout this chapter you will need to be root or able to `sudo` to become roo Your server environment is all set up. You are ready to initialize LXD. This is an automated script that asks a series of questions to get your LXD instance up and running: -``` +```bash lxd init ``` Here are the questions and our answers for the script, with a little explanation where warranted: -``` +```text Would you like to use LXD clustering? (yes/no) [default=no]: ``` If interested in clustering, do some additional research on that [here](https://documentation.ubuntu.com/lxd/en/latest/clustering/) -``` +```text Do you want to configure a new storage pool? (yes/no) [default=yes]: ``` This seems counter-intuitive. You have already created your ZFS pool, but it will become clear in a later question. Accept the default. -``` +```text Name of the new storage pool [default=default]: storage ``` Leaving this "default" is an option, but for clarity, using the same name you gave our ZFS pool is better. -``` +```text Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: ``` You want to accept the default. -``` +```text Create a new ZFS pool? (yes/no) [default=yes]: no ``` Here is where the resolution of the earlier question about creating a storage pool comes into play. -``` +```text Name of the existing ZFS pool or dataset: storage Would you like to connect to a MAAS server? (yes/no) [default=no]: ``` Metal As A Service (MAAS) is outside the scope of this document. -``` +```text Would you like to create a new local network bridge? (yes/no) [default=yes]: What should the new bridge be called? 
[default=lxdbr0]: What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: @@ -70,13 +70,13 @@ What IPv6 address should be used? (CIDR subnet notation, “auto” or “none If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you. -``` +```text Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes ``` This is necessary to snapshot the server. -``` +```text Address to bind LXD to (not including port) [default=all]: Port to bind LXD to [default=8443]: Trust password for new clients: @@ -85,7 +85,7 @@ Again: This trust password is how you will connect to the snapshot server or back from the snapshot server. Set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager. -``` +```text Would you like stale cached images to be updated automatically? (yes/no) [default=yes] Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: ``` @@ -94,13 +94,13 @@ Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: Before you continue on, you need to create your "lxdadmin" user and ensure that it has the privileges it needs. You need the "lxdadmin" user to be able to `sudo` to root and you need it to be a member of the lxd group. To add the user and ensure it is a member of both groups do: -``` +```bash useradd -G wheel,lxd lxdadmin ``` Set the password: -``` +```bash passwd lxdadmin ``` diff --git a/docs/books/lxd_server/04-firewall.md b/docs/books/lxd_server/04-firewall.md index 6370644e22..cdf389652f 100644 --- a/docs/books/lxd_server/04-firewall.md +++ b/docs/books/lxd_server/04-firewall.md @@ -25,13 +25,13 @@ As with any server, you need to ensure that it is secure from the outside world For _firewalld_ rules, you need to use [this basic procedure](../../guides/security/firewalld.md) or be familiar with those concepts. 
Our assumptions are: LAN network of 192.168.1.0/24 and a bridge named lxdbr0. To be clear, you might have many interfaces on your LXD server, with one perhaps facing your WAN. You are also going to create a zone for the bridged and local networks. This is just for zone clarity's sake. The other zone names do not really apply. This procedure assumes that you already know the basics of _firewalld_. -``` +```bash firewall-cmd --new-zone=bridge --permanent ``` You need to reload the firewall after adding a zone: -``` +```bash firewall-cmd --reload ``` @@ -45,18 +45,20 @@ You want to allow all traffic from the bridge. Just add the interface, and chang If you need to create a zone that you want to allow all access to the interface or source, but do not want to have to specify any protocols or services, then you *must* change the target from "default" to "ACCEPT". The same is true of "DROP" and "REJECT" for a particular IP block that you have custom zones for. To be clear, the "drop" zone will take care of that for you as long as you are not using a custom zone. -``` +```bash firewall-cmd --zone=bridge --add-interface=lxdbr0 --permanent firewall-cmd --zone=bridge --set-target=ACCEPT --permanent ``` + Assuming no errors and everything is still working just do a reload: -``` +```bash firewall-cmd --reload ``` + If you list out your rules now with `firewall-cmd --zone=bridge --list-all` you will see: -``` +```bash bridge (active) target: ACCEPT icmp-block-inversion: no @@ -72,22 +74,25 @@ bridge (active) icmp-blocks: rich rules: ``` + Note that you also want to allow your local interface. Again, the included zones are not appropriately named for this. 
Create a zone and use the source IP range for the local interface to ensure you have access:

-```
+```bash
firewall-cmd --new-zone=local --permanent
firewall-cmd --reload
```
+
Add the source IPs for the local interface, and change the target to "ACCEPT":

-```
+```bash
firewall-cmd --zone=local --add-source=127.0.0.1/8 --permanent
firewall-cmd --zone=local --set-target=ACCEPT --permanent
firewall-cmd --reload
```
+
Go ahead and list out the "local" zone to ensure your rules are there with `firewall-cmd --zone=local --list-all` which will show:

-```
+```bash
local (active)
target: ACCEPT
icmp-block-inversion: no
@@ -106,23 +111,26 @@ local (active)

You want to allow SSH from our trusted network. We will use the source IPs here, and the built-in "trusted" zone. The target for this zone is already "ACCEPT" by default.

-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.0/24
```
+
Add the service to the zone:

-```
+```bash
firewall-cmd --zone=trusted --add-service=ssh
```
+
If everything is working, move your rules to permanent and reload the rules:

-```
+```bash
firewall-cmd --runtime-to-permanent
firewall-cmd --reload
```
+
Listing out your "trusted" zone will show:

-```
+```bash
trusted (active)
target: ACCEPT
icmp-block-inversion: no
@@ -141,13 +149,13 @@ trusted (active)

By default, the "public" zone is in the enabled state and has SSH allowed. For security, you do not want SSH allowed on the "public" zone. Ensure that your zones are correct and that the access you are getting to the server is by one of the LAN IPs (in the case of our example). You might lock yourself out of the server if you do not verify this before continuing. When you are sure you have access from the correct interface, remove SSH from the "public" zone:

-```
+```bash
firewall-cmd --zone=public --remove-service=ssh
```

Test access and ensure you are not locked out.
If not, move your rules to permanent, reload, and list out zone "public" to ensure the removal of SSH: -``` +```bash firewall-cmd --runtime-to-permanent firewall-cmd --reload firewall-cmd --zone=public --list-all diff --git a/docs/books/lxd_server/05-lxd_images.md b/docs/books/lxd_server/05-lxd_images.md index 29a0c908e6..2fc1ed2325 100644 --- a/docs/books/lxd_server/05-lxd_images.md +++ b/docs/books/lxd_server/05-lxd_images.md @@ -17,7 +17,7 @@ Throughout this chapter you will need to run commands as your unprivileged user You probably can not wait to get started with a container. There are many container operating system possibilities. To get a feel for how many possibilities, enter this command: -``` +```bash lxc image list images: | more ``` @@ -25,13 +25,13 @@ Enter the space bar to page through the list. This list of containers and virtua The **last** thing you want to do is to page through looking for a container image to install, particularly if you know the image that you want to create. Change the command to show only Rocky Linux install options: -``` +```bash lxc image list images: | grep rocky ``` This brings up a much more manageable list: -``` +```bash | rockylinux/8 (3 more) | 0ed2f148f7c6 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | CONTAINER | 128.68MB | Aug 5, 2022 at 12:00am (UTC) | | rockylinux/8 (3 more) | 6411a033fdf1 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | VIRTUAL-MACHINE | 643.15MB | Aug 5, 2022 at 12:00am (UTC) | | rockylinux/8/arm64 (1 more) | e677777306cf | yes | Rockylinux 8 arm64 (20220805_02:29) | aarch64 | CONTAINER | 124.06MB | Aug 5, 2022 at 12:00am (UTC) | @@ -50,7 +50,7 @@ This brings up a much more manageable list: For the first container, you are going to use "rockylinux/8". To install it, you *might* use: -``` +```bash lxc launch images:rockylinux/8 rockylinux-test-8 ``` @@ -58,19 +58,19 @@ That will create a Rocky Linux-based container named "rockylinux-test-8". 
You can also create a container without starting it by substituting `init` for `launch` in the command above.

To start the container manually, use:

-```
+```bash
lxc start rockylinux-test-8
```

To rename the container (we are not going to do this here, but this is how to do it), first stop the container:

-```
+```bash
lxc stop rockylinux-test-8
```

Use the `move` command to change the container's name:

-```
+```bash
lxc move rockylinux-test-8 rockylinux-8
```

@@ -78,25 +78,25 @@ If you followed this instruction anyway, stop the container and move it back to

For the purposes of this guide, go ahead and install two more images for now:

-```
+```bash
lxc launch images:rockylinux/9 rockylinux-test-9
```

and

-```
+```bash
lxc launch images:ubuntu/22.04 ubuntu-test
```

Examine what you have by listing your images:

-```
+```bash
lxc list
```

which will return this:

-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -106,6 +106,4 @@ which will return this:
+-------------------+---------+----------------------+------+-----------+-----------+
| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 |
+-------------------+---------+----------------------+------+-----------+-----------+
- ```
-
diff --git a/docs/books/lxd_server/06-profiles.md b/docs/books/lxd_server/06-profiles.md
index 251d583f47..a107a01f85 100644
--- a/docs/books/lxd_server/06-profiles.md
+++ b/docs/books/lxd_server/06-profiles.md
@@ -29,7 +29,7 @@ For now, just be aware that this has drawbacks when choosing container images ba

To create our macvlan profile, use this command:

-```
+```bash
lxc profile create macvlan
```

@@ -37,13 +37,13 @@ If you were on a multi-interface machine and wanted more than one macvlan templa

You want to change the macvlan interface, but before you do, you need to know what the parent interface is for our LXD server. This will be the interface that has a LAN (in this case) assigned IP.
To find what interface that is, use: -``` +```bash ip addr ``` Look for the interface with the LAN IP assignment in the 192.168.1.0/24 network: -``` +```bash 2: enp3s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 40:16:7e:a9:94:85 brd ff:ff:ff:ff:ff:ff inet 192.168.1.106/24 brd 192.168.1.255 scope global dynamic noprefixroute enp3s0 @@ -56,7 +56,7 @@ In this case, the interface is "enp3s0". Next change the profile: -``` +```bash lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp3s0 ``` @@ -64,14 +64,13 @@ This command adds all of the necessary parameters to the macvlan profile require Examine what this command created, by using the command: -``` +```bash lxc profile show macvlan ``` Which will give you output similar to this: - -``` +```bash config: {} description: "" devices: @@ -87,13 +86,13 @@ You can use profiles for many other things, but assigning a static IP to a conta To assign the macvlan profile to rockylinux-test-8 you need to do the following: -``` +```bash lxc profile assign rockylinux-test-8 default,macvlan ``` Do the same thing for rockylinux-test-9: -``` +```bash lxc profile assign rockylinux-test-9 default,macvlan ``` @@ -101,7 +100,7 @@ This says, you want the default profile, and to apply the macvlan profile too. ## Rocky Linux macvlan -In RHEL distributions and clones, Network Manager has been in a constant state of change. Because of this, the way the `macvlan` profile works does not work (at least in comparison to other distributions), and requires a little additional work to assign IP addresses from DHCP or statically. +In RHEL distributions and clones, Network Manager has been in a constant state of change. Because of this, the way the `macvlan` profile works does not work (at least in comparison to other distributions), and requires a little additional work to assign IP addresses from DHCP or statically. 
Remember that none of this has anything to do with Rocky Linux particularly, but with the upstream package implementation.

@@ -115,18 +114,18 @@ Having the profile assigned, however, does not change the default configuration,

To test this, do the following:

-```
+```bash
lxc restart rockylinux-test-8
lxc restart rockylinux-test-9
```

List your containers again and note that the rockylinux-test-9 does not have an IP address anymore:

-```
+```bash
lxc list
```

-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -136,19 +135,19 @@ lxc list
+-------------------+---------+----------------------+------+-----------+-----------+
| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 |
+-------------------+---------+----------------------+------+-----------+-----------+
- ```
+

As you can see, our Rocky Linux 8.x container received the IP address from the LAN interface, whereas the Rocky Linux 9.x container did not. To further demonstrate the problem here, you need to run `dhclient` on the Rocky Linux 9.0 container. This will show us that the macvlan profile *is*, in fact, applied:

-```
+```bash
lxc exec rockylinux-test-9 dhclient
```

Another container listing now shows the following:

-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -162,51 +161,51 @@ Another container listing now shows the following:

That should have happened with a stop and start of the container, but it does not. Assuming that you want to use a DHCP assigned IP address every time, you can fix this with a simple crontab entry.
To do this, we need to gain shell access to the container by entering: -``` +```bash lxc exec rockylinux-test-9 bash ``` Next, lets determine the path to `dhclient`. To do this, because this container is from a minimal image, you will need to first install `which`: -``` +```bash dnf install which ``` then run: -``` +```bash which dhclient ``` which will return: -``` +```bash /usr/sbin/dhclient ``` Next, change root's crontab: -``` +```bash crontab -e ``` Add this line: -``` +```bash @reboot /usr/sbin/dhclient ``` -The crontab command entered uses _vi_ . To save your changes and exit use SHIFT+:+wq. +The crontab command entered uses *vi* . To save your changes and exit use ++shift+colon+"w"+"q"++. Exit the container and restart rockylinux-test-9: -``` +```bash lxc restart rockylinux-test-9 ``` Another listing will reveal that the container has the DHCP address assigned: -``` +```bash +-------------------+---------+----------------------+------+-----------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +-------------------+---------+----------------------+------+-----------+-----------+ @@ -225,19 +224,19 @@ To statically assign an IP address, things get even more convoluted. Since `netw To do this, you need to gain shell access to the container again: -``` +```bash lxc exec rockylinux-test-9 bash ``` Next, you are going to create a bash script in `/usr/local/sbin` called "static": -``` +```bash vi /usr/local/sbin/static ``` The contents of this script are not difficult: -``` +```bash #!/usr/bin/env bash /usr/sbin/ip link set dev eth0 name net0 @@ -246,41 +245,40 @@ The contents of this script are not difficult: /usr/sbin/ip route add default via 192.168.1.1 ``` -What are we doing here? +What are we doing here? 
* you rename eth0 to a new name that we can manage ("net0") * you assign the new static IP that we have allocated for our container (192.168.1.151) * you bring up the new "net0" interface * you need to add the default route for our interface - Make our script executable with: -``` +```bash chmod +x /usr/local/sbin/static ``` Add this to root's crontab for the container with the @reboot time: -``` +```bash @reboot /usr/local/sbin/static ``` Finally, exit the container and restart it: -``` +```bash lxc restart rockylinux-test-9 ``` Wait a few seconds and list out the containers again: -``` +```bash lxc list ``` You should see success: -``` +```bash +-------------------+---------+----------------------+------+-----------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +-------------------+---------+----------------------+------+-----------+-----------+ @@ -298,19 +296,19 @@ Luckily, In Ubuntu's implementation of Network Manager, the macvlan stack is NOT Just like with your rockylinux-test-9 container, you need to assign the profile to our container: -``` +```bash lxc profile assign ubuntu-test default,macvlan ``` To find out if DHCP assigns an address to the container stop and start the container again: -``` +```bash lxc restart ubuntu-test ``` List the containers again: -``` +```bash +-------------------+---------+----------------------+------+-----------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +-------------------+---------+----------------------+------+-----------+-----------+ @@ -326,13 +324,13 @@ Success! Configuring the Static IP is just a little different, but not at all hard. You need to change the .yaml file associated with the container's connection (`10-lxc.yaml`). For this static IP, you will use 192.168.1.201: -``` +```bash vi /etc/netplan/10-lxc.yaml ``` Change what is there to the following: -``` +```bash network: version: 2 ethernets: @@ -348,13 +346,13 @@ Save your changes and exit the container. 
Restart the container: -``` +```bash lxc restart ubuntu-test ``` When you list your containers again, you will see your static IP: -``` +```bash +-------------------+---------+----------------------+------+-----------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +-------------------+---------+----------------------+------+-----------+-----------+ diff --git a/docs/books/lxd_server/07-configurations.md b/docs/books/lxd_server/07-configurations.md index df19661747..085b4e8cca 100644 --- a/docs/books/lxd_server/07-configurations.md +++ b/docs/books/lxd_server/07-configurations.md @@ -15,13 +15,13 @@ Throughout this chapter you will need to run commands as your unprivileged user There are a wealth of options for configuring the container after installation. Before seeing those, however, let us examine the `info` command for a container. In this example, you will use the ubuntu-test container: -``` +```bash lxc info ubuntu-test ``` This will show the following: -``` +```bash Name: ubuntu-test Location: none Remote: unix:// @@ -60,7 +60,7 @@ Resources: There is much good information there, from the profiles applied, to the memory in use, disk space in use, and more. -### A word about configuration and some options +## A word about configuration and some options By default, LXD will assign the required system memory, disk space, CPU cores, and other resources, to the container. But what if you want to be more specific? That is totally possible. @@ -70,29 +70,29 @@ Just remember that every action you make to configure a container _can_ have neg Rather than run through all of the options for configuration, use the tab auto-complete to see the options available: -``` +```bash lxc config set ubuntu-test ``` -and TAB. +and ++tab++. This shows you all of the options for configuring a container. 
If you have questions about what one of the configuration options does, head to the [official documentation for LXD](https://documentation.ubuntu.com/lxd/en/latest/config-options/) and do a search for the configuration parameter, or Google the entire string, such as `lxc config set limits.memory` and examine the results of the search. Here we examine a few of the most used configuration options. For example, if you want to set the max amount of memory that a container can use: -``` +```bash lxc config set ubuntu-test limits.memory 2GB ``` That says that if the memory is available to use, for example there is 2GB of memory available, then the container can actually use more than 2GB if it is available. It is a soft limit, for example. -``` +```bash lxc config set ubuntu-test limits.memory.enforce 2GB ``` That says that the container can never use more than 2GB of memory, whether it is currently available or not. In this case it is a hard limit. -``` +```bash lxc config set ubuntu-test limits.cpu 2 ``` @@ -104,14 +104,13 @@ That says to limit the number of CPU cores that the container can use to 2. Remember when you set up our storage pool in the ZFS chapter? You named the pool "storage," but you could have named it anything. If you want to examine this, you can use this command, which works equally well for any of the other pool types too (as shown for dir): -``` +```bash lxc storage show storage ``` - This shows the following: -``` +```bash config: source: /var/snap/lxd/common/lxd/storage-pools/storage description: "" @@ -129,11 +128,10 @@ locations: This shows that all of our containers use our dir storage pool. When using ZFS, you can also set a disk quota on a container. 
Here is what that command looks like, setting a 2GB disk quota on the ubuntu-test container: -``` +```bash lxc config device override ubuntu-test root size=2GB ``` As stated earlier, use configuration options sparingly, unless you have got a container that wants to use way more than its share of resources. LXD, for the most part, will manage the environment well on its own. Many more options exist that might be of interest to some people. Doing your own research will help you to find out if any of those are of value in your environment. - diff --git a/docs/books/lxd_server/08-snapshots.md b/docs/books/lxd_server/08-snapshots.md index 5e5b627510..a21f4ce319 100644 --- a/docs/books/lxd_server/08-snapshots.md +++ b/docs/books/lxd_server/08-snapshots.md @@ -17,25 +17,25 @@ Container snapshots, along with a snapshot server (more on that later), are prob The author used LXD containers for PowerDNS public facing servers, and the process of updating those applications became less worrisome, thanks to taking snapshots before every update. -You can even snapshot a container when it is running. +You can even snapshot a container when it is running. ## The snapshot process Start by getting a snapshot of the ubuntu-test container by using this command: -``` +```bash lxc snapshot ubuntu-test ubuntu-test-1 ``` Here, you are calling the snapshot "ubuntu-test-1", but you can call it anything. To ensure that you have the snapshot, do an `lxc info` of the container: -``` +```bash lxc info ubuntu-test ``` You have looked at an info screen already. If you scroll to the bottom, you now see: -``` +```bash Snapshots: ubuntu-test-1 (taken at 2021/04/29 15:57 UTC) (stateless) ``` @@ -44,13 +44,13 @@ Success! Our snapshot is in place. Get into the ubuntu-test container: -``` +```bash lxc exec ubuntu-test bash ``` Create an empty file with the _touch_ command: -``` +```bash touch this_file.txt ``` @@ -58,19 +58,19 @@ Exit the container. 
Before restoring the container how it was prior to creating the file, the safest way to restore a container, particularly if there have been many changes, is to stop it first: -``` +```bash lxc stop ubuntu-test ``` Restore it: -``` +```bash lxc restore ubuntu-test ubuntu-test-1 ``` Start the container again: -``` +```bash lxc start ubuntu-test ``` @@ -78,7 +78,7 @@ If you get back into the container again and look, our "this_file.txt" that you When you do not need a snapshot anymore you can delete it: -``` +```bash lxc delete ubuntu-test/ubuntu-test-1 ``` @@ -94,7 +94,7 @@ lxc delete ubuntu-test/ubuntu-test-1 So always delete snapshots with the container running. -In the chapters that follow you will: +In the chapters that follow you will: * set up the process of creating snapshots automatically * set up expiration of a snapshot so that it goes away after a certain length of time diff --git a/docs/books/lxd_server/09-snapshot_server.md b/docs/books/lxd_server/09-snapshot_server.md index a64bbecded..b816d6047a 100644 --- a/docs/books/lxd_server/09-snapshot_server.md +++ b/docs/books/lxd_server/09-snapshot_server.md @@ -17,7 +17,7 @@ As noted at the beginning, the snapshot server for LXD must be a mirror of the p The process of building the snapshot server is exactly like the production server. To fully emulate our production server set up, do all of **Chapters 1-4** again on the snapshot server, and when completed, return to this spot. -You are back!! Congratulations, this must mean that you have successfully completed the basic installation for the snapshot server. +You are back!! Congratulations, this must mean that you have successfully completed the basic installation for the snapshot server. ## Setting up the primary and snapshot server relationship @@ -27,38 +27,38 @@ In our lab, we do not have that luxury. 
Perhaps you've got the same scenario running.

In our lab, the primary LXD server is running on 192.168.1.106 and the snapshot LXD server is running on 192.168.1.141. SSH into each server and add the following to the /etc/hosts file:

-```
+```bash
192.168.1.106 lxd-primary
192.168.1.141 lxd-snapshot
```

Next, you need to allow all traffic between the two servers. To do this, you are going to change the `firewalld` rules. First, on the lxd-primary server, add this line:

-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.141 --permanent
```

and on the snapshot server, add this rule:

-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.106 --permanent
```

then reload:

-```
+```bash
firewall-cmd --reload
```

Next, as our unprivileged (lxdadmin) user, you need to set the trust relationship between the two machines. This is done by running the following on lxd-primary:

-```
+```bash
lxc remote add lxd-snapshot
```

This displays the certificate to accept. Accept it, and it will prompt for your password. This is the "trust password" that you set up when doing the LXD initialization step. Hopefully, you are securely keeping track of all of these passwords. When you enter the password, you will receive this:

-```
+```bash
Client certificate stored at server: lxd-snapshot
```

@@ -70,31 +70,31 @@ Before you can migrate your first snapshot, you need to have any profiles create

You will need to create this for lxd-snapshot. Go back to [Chapter 6](06-profiles.md) and create the "macvlan" profile on lxd-snapshot if you need to. If your two servers have the same parent interface names ("enp3s0" for example) then you can copy the "macvlan" profile over to lxd-snapshot without recreating it:

-```
+```bash
lxc profile copy macvlan lxd-snapshot
```

With all of the relationships and profiles set up, the next step is to actually send a snapshot from lxd-primary over to lxd-snapshot. If you have been following along exactly, you have probably deleted all of your snapshots.
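Before attempting the copy, it can be worth confirming that the remote actually registered. The `remote_exists` helper below is a hypothetical sketch (it is not part of the Rocky docs or the LXD CLI); it simply greps the output of `lxc remote list`:

```bash
# Hypothetical check (not from the docs): confirm a named remote appears in
# the `lxc remote list` output before attempting a copy to it.
remote_exists() {
  local remotes=$1 name=$2
  grep -qw "$name" <<<"$remotes"
}

# On lxd-primary you would feed it live output:
# remote_exists "$(lxc remote list)" lxd-snapshot && echo "remote OK"
```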
Create another snapshot:

-```
+```bash
lxc snapshot rockylinux-test-9 rockylinux-test-9-snap1
```

If you run the "info" command for `lxc`, you can see the snapshot at the bottom of our listing:

-```
+```bash
lxc info rockylinux-test-9
```

Which will show something like this at the bottom:

-```
+```bash
rockylinux-test-9-snap1 (taken at 2021/05/13 16:34 UTC) (stateless)
```

OK, fingers crossed! Let us try to migrate our snapshot:

-```
+```bash
lxc copy rockylinux-test-9/rockylinux-test-9-snap1 lxd-snapshot:rockylinux-test-9
```

@@ -102,7 +102,7 @@ This command says, within the container rockylinux-test-9, you want to send the

After a short time, the copy will be complete. Want to find out for sure? Do an `lxc list` on the lxd-snapshot server, which should return the following:

-```
+```bash
+-------------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+------+------+-----------+-----------+
@@ -112,13 +112,13 @@ After a short time, the copy will be complete. Want to find out for sure? Do an

Success! Try starting it. Because we are starting it on the lxd-snapshot server, you need to stop it first on the lxd-primary server to avoid an IP address conflict:

-```
+```bash
lxc stop rockylinux-test-9
```

And on the lxd-snapshot server:

-```
+```bash
lxc start rockylinux-test-9
```

@@ -128,9 +128,9 @@ Assuming all of this works without error, stop the container on lxd-snapshot and

The snapshots copied to lxd-snapshot will be down when they migrate, but if you have a power event or need to reboot the snapshot server because of updates or something, you will end up with a problem. Those containers will try to start on the snapshot server creating a potential IP address conflict.

-To eliminate this, you need to set the migrated containers so that they will not start on reboot of the server. 
For our newly copied rockylinux-test-9 container, you will do this with: +To eliminate this, you need to set the migrated containers so that they will not start on reboot of the server. For our newly copied rockylinux-test-9 container, you will do this with: -``` +```bash lxc config set rockylinux-test-9 boot.autostart 0 ``` @@ -142,7 +142,7 @@ It is great that you can create snapshots when you need to, and sometimes you _d The first thing you need to do is schedule a process to automate snapshot creation on lxd-primary. You will do this for each container on the lxd-primary server. When completed, it will take care of this going forward. You do this with the following syntax. Note the similarities to a crontab entry for the timestamp: -``` +```bash lxc config set [container_name] snapshots.schedule "50 20 * * *" ``` @@ -150,18 +150,18 @@ What this is saying is, do a snapshot of the container name every day at 8:50 PM To apply this to our rockylinux-test-9 container: -``` +```bash lxc config set rockylinux-test-9 snapshots.schedule "50 20 * * *" ``` You also want to set up the name of the snapshot to be meaningful by our date. LXD uses UTC everywhere, so our best bet to keep track of things, is to set the snapshot name with a date and time stamp that is in a more understandable format: -``` +```bash lxc config set rockylinux-test-9 snapshots.pattern "rockylinux-test-9{{ creation_date|date:'2006-01-02_15-04-05' }}" ``` GREAT, but you certainly do not want a new snapshot every day without getting rid of an old one, right? You would fill up the drive with snapshots. 
To fix this you run:

-```
+```bash
lxc config set rockylinux-test-9 snapshots.expiry 1d
```

diff --git a/docs/books/lxd_server/10-automating.md b/docs/books/lxd_server/10-automating.md
index 9e46f68bb2..4d43bc0e21 100644
--- a/docs/books/lxd_server/10-automating.md
+++ b/docs/books/lxd_server/10-automating.md
@@ -13,20 +13,19 @@ tags:

Throughout this chapter you will need to be root or able to `sudo` to become root.

-Automating the snapshot process makes things a whole lot easier. 
+Automating the snapshot process makes things a whole lot easier.

## Automating the snapshot copy process

-
Perform this process on lxd-primary. The first thing you need to do is create a script, run by cron, in /usr/local/sbin called "refreshcontainers.sh":

-```
+```bash
sudo vi /usr/local/sbin/refreshcontainers.sh
```

The script is pretty minimal:

-```
+```bash
#!/bin/bash
# This script is for doing an lxc copy --refresh against each container, copying
# and updating them to the snapshot server.
@@ -40,25 +39,25 @@ for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)

Make it executable:

-```
+```bash
sudo chmod +x /usr/local/sbin/refreshcontainers.sh
```

Change the ownership of this script to your lxdadmin user and group:

-```
+```bash
sudo chown lxdadmin:lxdadmin /usr/local/sbin/refreshcontainers.sh
```

Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:

-```
+```bash
crontab -e
```

Your entry will look like this:

-```
+```bash
00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1
```

@@ -68,6 +67,6 @@ This will create a log in lxdadmin's home directory called "refreshlog" which wi

The automated procedure will fail sometimes. This generally happens when a particular container fails to refresh.
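Since a failed refresh is usually transient, one option is to wrap the copy in a small retry loop. This is a sketch under the same assumptions as the script above (the snap path to `lxc`); the `retry` helper is hypothetical and not part of the Rocky docs:

```bash
# Hypothetical retry wrapper (not part of the docs' script): run a command up
# to N times, pausing between failed attempts, and fail only if every try fails.
retry() {
  local attempts=$1
  shift
  local n
  for ((n = 1; n <= attempts; n++)); do
    "$@" && return 0
    if ((n < attempts)); then
      echo "attempt $n failed, retrying..." >&2
      sleep 5 # give LXD a moment before the next try
    fi
  done
  return 1
}

# Example, on lxd-primary:
# retry 3 /var/lib/snapd/snap/bin/lxc copy --refresh rockylinux-test-9 lxd-snapshot:rockylinux-test-9
```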
You can manually re-run the refresh with the following command (assuming rockylinux-test-9 here is our container):

-```
+```bash
lxc copy --refresh rockylinux-test-9 lxd-snapshot:rockylinux-test-9
```

diff --git a/docs/books/lxd_server/30-appendix_a.md b/docs/books/lxd_server/30-appendix_a.md
index 7754989b80..fa3e376498 100644
--- a/docs/books/lxd_server/30-appendix_a.md
+++ b/docs/books/lxd_server/30-appendix_a.md
@@ -24,25 +24,25 @@ While not a part of the chapters for an LXD Server, this procedure will help tho

From the command line, install the EPEL repository:
-``` +```text Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: dir ``` Note that `dir` is somewhat slower than `btrfs`. If you have the foresight to leave a disk empty, you can use that device (example: /dev/sdb) for the `btrfs` device and then select `btrfs`, but only if your host computer has an operating system that supports `btrfs`. Rocky Linux and any RHEL clone will not support `btrfs` - not yet, anyway. `dir` will work fine for a lab environment. -``` +```text Would you like to connect to a MAAS server? (yes/no) [default=no]: ``` Metal As A Service (MAAS) is outside the scope of this document. -``` +```text Would you like to create a new local network bridge? (yes/no) [default=yes]: What should the new bridge be called? [default=lxdbr0]: What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: @@ -100,13 +100,13 @@ What IPv6 address should be used? (CIDR subnet notation, “auto” or “none If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you. -``` +```text Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes ``` This is necessary to snapshot the workstation. Answer "yes" here. -``` +```text Address to bind LXD to (not including port) [default=all]: Port to bind LXD to [default=8443]: Trust password for new clients: @@ -115,7 +115,7 @@ Again: This trust password is how you will connect to the snapshot server or back from the snapshot server. Set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager. -``` +```text Would you like stale cached images to be updated automatically? (yes/no) [default=yes] Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: ``` @@ -124,7 +124,7 @@ Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: The next thing you need to do is to add your user to the lxd group. 
Again, you will need to use `sudo` or be root for this:

-```
+```bash
sudo usermod -a -G lxd [username]
```

@@ -136,13 +136,13 @@ At this point, you have made a bunch of changes. Before you go any further, rebo

To ensure that `lxd` started and that your user has privileges, from the shell prompt do:

-```
+```bash
lxc list
```

Note you have not used `sudo` here. Your user has the ability to enter these commands. You will see something like this:

-```
+```bash
+------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+----------------------+------+-----------+-----------+
@@ -163,6 +163,6 @@ From this point, you can use the chapters from our "LXD Production Server" to co

* [LXD Beginners Guide](../../guides/containers/lxd_web_servers.md) which will get you started using LXD productively.
* [Official LXD Overview and Documentation](https://documentation.ubuntu.com/lxd/en/latest/)

-## Conclusion 
+## Conclusion

-LXD is a powerful tool that you can use on workstations or servers for increased productivity. On a workstation, it is great for lab testing, but can also keep semi-permanent instances of operating systems and applications available in their own private space.
+LXD is a powerful tool that you can use on workstations or servers for increased productivity. On a workstation, it is great for lab testing, but can also keep semi-permanent instances of operating systems and applications available in their own private space.