diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md index b125b1a0b..2df657a94 100644 --- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md +++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md @@ -87,9 +87,13 @@ This use-case will focus on conversion from CentOS (though this could be another - The **CentOS7 Development** inventory source **Details** view will be displayed. Click on the variables expansion button on the right side. + ![Controller inventories keyed_groups](images/update_controller_inventory_inventory_filter.png) + +- Looking at the Source variables, first let's look at `filters` and `hostnames`. The `filters` section defines which instances should be selected for inclusion within the given inventory. In this case, the tags `ContentView`, `Environment`, `Student`, and `guid` will be utilized...all instances with tags matching the current values defined for each tag will be selected. The `hostnames` section defines how the names of the filtered resources will appear in the inventory. In this case, the value currently assigned to the `NodeName` tag will be utilized as the name within the inventory. + ![Controller inventories keyed_groups](images/update_controller_inventory_05.png) -- Scroll down the source variables section until you see "keyed_groups". [Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags.
In this case, when a dynamic inventory generation event is executed, if the EC2 inventory plugin comes across an instance with the "app_stack_name" and "AnsibleGroup" tags, then it will create an inventory group with the name beginning with the value assigned to the "app_stack_name" tag, an "_" (underscore) and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to "stack02" and the "AnsibleGroup" tag is set to "appdbs", then the inventory group "stack02_appdbs" will be created (or confirmed if already existing) and that instance will be assigned to the group. +- Scroll down the source variables section until you see "keyed_groups". [Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags. In this case, given the instances selected via the filters in the previous section, if any of these instances are currently tagged with the "app_stack_name" and "AnsibleGroup" tags, then the inventory plugin will create an inventory group whose name begins with the value assigned to the "app_stack_name" tag, followed by an "_" (underscore), and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to `stack02` and the "AnsibleGroup" tag is set to `appdbs`, then the inventory group `stack02_appdbs` will be created (or confirmed if already existing) and that instance will be assigned to the `stack02_appdbs` group. - Click on "Done" in the Source variables expanded view.
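Put together, the `filters`, `hostnames`, and `keyed_groups` settings described above look roughly like the following in the source variables. This is a minimal sketch of an `amazon.aws.aws_ec2` inventory source; the tag values and the exact `keyed_groups` expression are illustrative, not necessarily what your environment defines:

```yaml
plugin: amazon.aws.aws_ec2
filters:
  # only instances whose tags match these values are selected (values are examples)
  tag:ContentView: centos7_dev
  tag:Environment: dev
  tag:Student: student1
  tag:guid: abcd1
hostnames:
  # name each selected instance after the value of its NodeName tag
  - tag:NodeName
keyed_groups:
  # build group names like "stack02_appdbs" from the two tags
  - key: tags.app_stack_name + '_' + tags.AnsibleGroup
    separator: ""
```

Setting `separator: ""` with no `prefix` keeps the group name exactly as the composed tag values, avoiding a leading underscore.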
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png new file mode 100644 index 000000000..29c073ee5 Binary files /dev/null and b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png differ diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md index 24e5df6d9..3234c1324 100644 --- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md +++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md @@ -75,6 +75,10 @@ One of the prerequisites for successful Convert2RHEL OS conversions is that the - We can see that the CentOS hosts have some package updates that can be applied. + > **Note** + > + > The `/var/cache/yum` directory is used by the `yum` utility to cache RPM metadata and packages that have been accessed/installed. Over time, this directory can consume a significant amount of space, often filling the `/var` filesystem. If the **OS / Patch OS to latest** job template errors out due to `/var` filesystem space exhaustion, run the **UTILITY / Clear yum cache** job template against the *CentOS7_Dev* inventory group to free up space on the `/var` filesystem. Optionally, consider running the **UTILITY / Clear yum cache** job template before the **OS / Patch OS to latest** job template as a preemptive step to ensure `/var` filesystem space exhaustion is not an issue.
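The cleanup the **UTILITY / Clear yum cache** job template performs could be approximated by a short playbook like the one below. This is a sketch under assumptions; the actual playbook behind the job template may do more:

```yaml
---
- name: Clear yum cache to reclaim space on /var
  hosts: CentOS7_Dev
  become: true
  tasks:
    - name: Remove cached metadata and packages
      ansible.builtin.command: yum clean all

    - name: Delete any leftover files under /var/cache/yum
      ansible.builtin.file:
        path: /var/cache/yum
        state: absent
```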
+ - Return to the AAP Web UI browser tab and navigate to Resources > Templates by clicking on "Templates" under the "Resources" group in the navigation menu: ![Job templates listed on AAP Web UI](images/aap_templates_2.png) diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/images/patch_os_limit_dialog.png b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/images/patch_os_limit_dialog.png index 69439eff2..1429883ee 100644 Binary files a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/images/patch_os_limit_dialog.png and b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/images/patch_os_limit_dialog.png differ diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md index c0fc32495..172efa721 100644 --- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md +++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md @@ -90,7 +90,17 @@ less /var/log/convert2rhel/convert2rhel-pre-conversion.txt - When the pre-conversion report is generated, the Convert2RHEL framework collects system data and assesses convertibility based on a large collection of checks. When any of these checks uncovers a potential risk, it is recorded as a finding in the report. -- The good news is that the warning regarding the "third party" package `katello-ca-consumer-satellite` is something we can ignore, as this package is part of the registration of the CentOS system to Satellite. And we can ignore the warnings about an outdated version of convert2rhel, as we know that we are not using the most up to date version. +- The good news is that the warning regarding the "third party" package `katello-ca-consumer-satellite`: +`(WARNING) LIST_THIRD_PARTY_PACKAGES::THIRD_PARTY_PACKAGE_DETECTED - Third party packages detected` +...is something we can ignore, as this package is part of the registration of the CentOS system to Satellite.
+ +- Other potential findings that might be present: + - `(ERROR) REMOVE_EXCLUDED_PACKAGES::EXCLUDED_PACKAGE_REMOVAL_FAILED - Failed to remove excluded package` + This finding typically means that a package and version present on the system to be converted cannot be found during the conversion analysis, and as such the built-in rollback functionality within the `convert2rhel` utility would not be able to successfully roll back a conversion attempt if something went wrong. + - `(OVERRIDABLE) PACKAGE_UPDATES::OUT_OF_DATE_PACKAGES - Outdated packages detected` + and + `(WARNING) CONVERT2RHEL_LATEST_VERSION::ALLOW_OLDER_VERSION_ENVIRONMENT_VARIABLE - Outdated convert2rhel version detected` + ...both relate to not using the most up-to-date version of the `convert2rhel` utility. Remember, to maintain optimal stability for the lab/workshop/demo environment, we pin the `convert2rhel` utility to a specific release so we can closely manage the state of the code base. As new versions are tested against the workshop environment, updates will be made, so you may or may not experience the above issues in your pre-conversion analysis results. ### Challenge Lab: What if we were to experience warnings we are unsure of? diff --git a/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/README.md b/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/README.md index e002edbc6..3c4db7bb3 100644 --- a/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/README.md +++ b/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/README.md @@ -87,9 +87,13 @@ This use-case will focus on the in-place upgrade of RHEL to the next major versi - The **RHEL7 Development** inventory source **Details** view will be displayed. Click on the Source variables expansion button on the right side.
+ ![Controller inventories keyed_groups](images/update_controller_inventory_inventory_filter.png) + +- Looking at the Source variables, first let's look at `filters` and `hostnames`. The `filters` section defines which instances should be selected for inclusion within the given inventory. In this case, the tags `ContentView`, `Environment`, `Student`, and `guid` will be utilized...all instances with tags matching the current values defined for each tag will be selected. The `hostnames` section defines how the names of the filtered resources will appear in the inventory. In this case, the value currently assigned to the `NodeName` tag will be utilized as the name within the inventory. + ![Controller inventories keyed_groups](images/update_controller_inventory_05.png) -- Scroll down the source variables section until you see "keyed_groups". [Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags. In this case, when a dynamic inventory generation event is executed, if the EC2 inventory plugin comes across an instance with the "app_stack_name" and "AnsibleGroup" tags, then it will create an inventory group with the name beginning with the value assigned to the "app_stack_name" tag, an "_" (underscore) and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to "stack01" and the "AnsibleGroup" tag is set to "appdbs", then the inventory group "stack01_appdbs" will be created (or confirmed if already existing) and that instance will be assigned to the group. +- Scroll down the source variables section until you see "keyed_groups".
[Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags. In this case, given the instances selected via the filters in the previous section, if any of these instances are currently tagged with the "app_stack_name" and "AnsibleGroup" tags, then the inventory plugin will create an inventory group whose name begins with the value assigned to the "app_stack_name" tag, followed by an "_" (underscore), and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to `stack01` and the "AnsibleGroup" tag is set to `appdbs`, then the inventory group `stack01_appdbs` will be created (or confirmed if already existing) and that instance will be assigned to the `stack01_appdbs` group. - Click on "Done" in the Source variables expanded view. diff --git a/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png b/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png new file mode 100644 index 000000000..82e769c2a Binary files /dev/null and b/exercises/rhdp_auto_satellite/4-ripu/1.2-three-tier-app/images/update_controller_inventory_inventory_filter.png differ diff --git a/roles/manage_ec2_instances/tasks/inventory/addhost_network.yml b/roles/manage_ec2_instances/tasks/inventory/addhost_network.yml index 5a4607c7b..cfd4749db 100644 --- a/roles/manage_ec2_instances/tasks/inventory/addhost_network.yml +++ b/roles/manage_ec2_instances/tasks/inventory/addhost_network.yml @@ -40,6 +40,7 @@ username: "{{ item.tags.Student }}" ansible_user: "{{ item.tags.username }}" ansible_port: "{{ ssh_port }}" + ansible_libssh_publickey_algorithms: "ssh-rsa" ansible_ssh_private_key_file: "{{ playbook_dir }}/{{ ec2_name_prefix|lower }}/{{ ec2_name_prefix|lower }}-private.pem" private_ip: "{{ 
item.private_ip_address }}" ansible_network_os: "{{ item.tags.ansible_network_os }}" diff --git a/roles/manage_ec2_instances/tasks/resources/resources.yml b/roles/manage_ec2_instances/tasks/resources/resources.yml index 497a1b0c2..758499f24 100644 --- a/roles/manage_ec2_instances/tasks/resources/resources.yml +++ b/roles/manage_ec2_instances/tasks/resources/resources.yml @@ -139,7 +139,7 @@ amazon.aws.ec2_key: name: "{{ ec2_name_prefix }}-key" region: "{{ ec2_region }}" - key_type: "ed25519" + # key_type: "ed25519"  # disabled for now; ed25519 keys still need a fix for Juniper devices register: create_key - name: save private key
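With `key_type` commented out, `amazon.aws.ec2_key` falls back to its default key type (RSA), which pairs with the `ansible_libssh_publickey_algorithms: "ssh-rsa"` host variable added above for the network nodes. One way to re-enable ed25519 later without another code change might look like the sketch below; the `use_ed25519` toggle is hypothetical, not part of the role:

```yaml
- name: create EC2 ssh key pair (RSA by default)
  amazon.aws.ec2_key:
    name: "{{ ec2_name_prefix }}-key"
    region: "{{ ec2_region }}"
    # "use_ed25519" is a hypothetical variable defaulting to false until
    # ed25519 keys are verified to work with the Juniper instances
    key_type: "{{ 'ed25519' if use_ed25519 | default(false) else 'rsa' }}"
  register: create_key
```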