Vagrantfiles and Ansible Inventory #272

Merged: 5 commits, Oct 2, 2022
1 change: 0 additions & 1 deletion .ansible-lint
@@ -3,7 +3,6 @@ use_default_rules: true

skip_list:
- name[casing]
- name[play]

exclude_paths:
- .github
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@

vagrant/*/.vagrant
vagrant/*/*pub
vagrant/*/*zip
vagrant/*/*ZIP
38 changes: 37 additions & 1 deletion README.adoc
@@ -64,7 +64,7 @@ cd vagrant/dbfs
VAGRANT_EXPERIMENTAL=disks vagrant up
----

IMPORTANT: Copy `LINUX.X64_190000_db_home.zip` into `/tmp` on Vagrantbox
IMPORTANT: Copy `LINUX.X64_193000_grid_home.zip` into `/vagrant`, in the same directory as the Vagrantfile.


#### Start Playbook
@@ -77,6 +77,42 @@ The `playbook/os_vagrant.yml` fixes that.
ansible-playbook -e hostgroup=all -i inventory/dbfs/hosts.yml playbooks/os_vagrant.yml playbooks/single-instance-fs.yml
----

### Single Instance with Oracle Restart (HAS) in Vagrantbox


#### Vagrantbox for Database Server

Requirements::
8500 MB RAM +
20 GB disk space

The `has` inventory expects a 2nd disk at `/dev/sdb`.
See `host_fs_layout` for an example.
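For reference, the `host_fs_layout` shipped with the new `inventory/has` in this PR is what consumes that 2nd disk (excerpt from `inventory/has/group_vars/all/host.yml`):

.host_fs_layout (excerpt)
----
host_fs_layout:
  - vgname: vgora
    state: present
    filesystem:
      - {mntp: /u01, lvname: orabaselv, lvsize: 40G, fstype: xfs}
      - {mntp: swap, lvname: swaplv, lvsize: 16g, fstype: swap}
    disk:
      - {device: /dev/sdb, pvname: /dev/sdb1}
----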

The Vagrantfile is in `vagrant/has` of `ansible-oracle`.

IMPORTANT: Setting `VAGRANT_EXPERIMENTAL=disks` is essential.
It adds the 2nd disk for LVM and two ASM disks to the VM during startup.
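Combined with the `has` inventory added in this PR (`host.yml` and `asm.yml`), the VM ends up with the following disks. This is a sketch; the device naming assumes the default ordering Vagrant produces on this box:

----
/dev/sda  root disk (rootvg)
/dev/sdb  LVM physical volume for vgora (/u01)
/dev/sdc  ASM disk data01 (diskgroup data)
/dev/sdd  ASM disk fra01 (diskgroup fra)
----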

.Start Vagrantbox
----
cd vagrant/has
VAGRANT_EXPERIMENTAL=disks vagrant up
----

IMPORTANT: Copy `LINUX.X64_193000_grid_home.zip` and `LINUX.X64_190000_db_home.zip` into `/vagrant`, in the same directory as the Vagrantfile.


#### Start Playbook

IMPORTANT: The Vagrantbox used does not configure `/etc/hosts` correctly for Oracle.
The playbook `playbooks/os_vagrant.yml` fixes that.

.execute playbook
----
ansible-playbook -e hostgroup=all -i inventory/has/hosts.yml playbooks/os_vagrant.yml playbooks/single-instance-asm.yml
----

= Roles

== common
8 changes: 8 additions & 0 deletions changelogs/fragments/272-vagrant.yml
@@ -0,0 +1,8 @@
---
minor_changes:
- "vagrant: Vagrantfile for dbfs & has (#272)"
- "inventory: New Inventory for has (#272)"
removed_features:
- "desupported leftover racattackl-install.yml (#272)"
bugfixes:
- "ansible-lint: removed name[play] from exceptions (#272)"
2 changes: 1 addition & 1 deletion inventory/dbfs/group_vars/all/software_src.yml
@@ -4,7 +4,7 @@ oracle_sw_copy: false
oracle_sw_unpack: false

# Directory for Installation-Media
oracle_stage_remote: /tmp
oracle_stage_remote: /vagrant

# Example for Remote NFS
# install_from_nfs: true # Mount NFS-Share?
25 changes: 25 additions & 0 deletions inventory/has/group_vars/all/asm.yml
@@ -0,0 +1,25 @@
---
oracle_install_version_gi: 19.3.0.0
apply_patches_gi: false

oracle_asm_init_dg: data # 1st diskgroup

asm_diskgroups: # ASM Diskgroups used for DB-storage. Should map to dict asm_storage_layout.
- diskgroup: data
state: present
properties:
- {redundancy: external, ausize: 4}
attributes:
- {name: compatible.rdbms, value: "19.0.0.0.0"}
- {name: compatible.asm, value: "19.0.0.0.0"}
disk:
- {device: /dev/sdc, asmlabel: data01}
- diskgroup: fra
state: present
properties:
- {redundancy: external, ausize: 4}
attributes:
- {name: compatible.rdbms, value: "19.0.0.0.0"}
- {name: compatible.asm, value: "19.0.0.0.0"}
disk:
- {device: /dev/sdd, asmlabel: fra01}
96 changes: 96 additions & 0 deletions inventory/has/group_vars/all/database.yml
@@ -0,0 +1,96 @@
---
# This is an example for 1 Instance on 1 Host.
# Please look at ansible-oracle-inventory for more complex configurations:
# https://github.com/opitzconsulting/ansible-oracle-inventory
#
oracle_databases:
- home: &db_config_home 19300-base
oracle_db_name: &oracle_db_name ORCL
oracle_db_type: SI # Type of database (RAC,RACONENODE,SI)
is_container: true
storage_type: ASM # Database storage to be used. ASM or FS.
oracle_db_mem_totalmb: 1536 # Amount of RAM to be used for SGA
oracle_database_type: MULTIPURPOSE # MULTIPURPOSE|DATA_WAREHOUSING|OLTP
redolog_size: 75M
redolog_groups: 3
datafile_dest: +DATA
recoveryfile_dest: +FRA
# listener_name: LISTENER # This home will have a listener configured
listener_port: &cdb_listener_port 1521
# *local_listener is used in initparam as an anchor
local_listener: &local_listener "'{{ ansible_hostname }}:1521'"
archivelog: false
flashback: false
force_logging: true
state: present
statspack:
purgedays: 14
snaplevel: 5
state: present
tablespaces:
- {name: system, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
- {name: sysaux, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
- {name: undotbs1, size: 10M, autoextend: true, next: 50M, maxsize: 8G, content: permanent, state: present, bigfile: false}
- {name: users, size: 10M, autoextend: true, next: 50M, maxsize: 2G, content: permanent, state: present, bigfile: false}
- {name: temp, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
init_parameters:
- {name: audit_trail, value: 'NONE', scope: spfile, state: present}
- {name: processes, value: '400', scope: spfile, state: present, dbca: false}
# - {name: local_listener, value: *local_listener, scope: both, state: present}
- {name: archive_lag_target, value: '900', scope: both, state: present}
- {name: control_management_pack_access, value: 'NONE', scope: both, state: present}
- {name: control_file_record_keep_time, value: '30', scope: both, state: present}
- {name: db_files, value: '200', scope: spfile, state: present}
- {name: deferred_segment_creation, value: 'false', scope: both, state: present}
- {name: filesystemio_options, value: 'setall', scope: spfile, state: present}
- {name: job_queue_processes, value: '10', scope: both, state: present}
# Disable forcing hugepages on really small systems
# - {name: use_large_pages ,value: 'ONLY', scope: spfile, state: present}
- {name: log_archive_dest_1, value: 'location=USE_DB_RECOVERY_FILE_DEST', scope: both, state: present}
- {name: log_buffer, value: '64M', scope: spfile, state: present}
- {name: pga_aggregate_target, value: '200M', scope: both, state: present, dbca: false}
- {name: sga_target, value: '1800M', scope: spfile, state: present, dbca: false}
- {name: shared_pool_size, value: '768M', scope: both, state: present, dbca: true}
- {name: recyclebin, value: 'off', scope: spfile, state: present}
- {name: standby_file_management, value: 'AUTO', scope: both, state: present}
- {name: streams_pool_size, value: '152M', scope: spfile, state: present}
- {name: "_cursor_obsolete_threshold", value: '1024', scope: spfile, state: present}
- {name: max_pdbs, value: '3', scope: both, state: present}
- {name: clonedb, value: 'true', scope: spfile, state: present, dbca: false}
- {name: db_create_file_dest, value: '+DATA', scope: both, state: present}
- {name: db_create_online_log_dest_1, value: '+DATA', scope: both, state: present}
- {name: db_recovery_file_dest_size, value: '10G', scope: both, state: present, dbca: false}

profiles:
- name: DEFAULT
state: present
attributes:
- {name: password_life_time, value: unlimited}

users:
- schema: dbsnmp
state: unlocked
update_password: always

rman_jobs:
- {name: parameter}
- {name: offline_level0, disabled: false, weekday: "0", hour: "01", minute: "10", day: "*"}

oracle_pdbs:
- home: *db_config_home
listener_port: *cdb_listener_port
cdb: *oracle_db_name
pdb_name: ORCLPDB
state: present
profiles: "{{ oracle_default_profiles }}"
statspack:
purgedays: 14
snaplevel: 7
state: present

tablespaces:
- {name: system, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
- {name: sysaux, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
- {name: undotbs1, size: 10M, autoextend: true, next: 50M, maxsize: 8G, content: permanent, state: present, bigfile: false}
- {name: users, size: 10M, autoextend: true, next: 50M, maxsize: 2G, content: permanent, state: present, bigfile: false}
- {name: temp, size: 10M, autoextend: true, next: 50M, maxsize: 4G, content: permanent, state: present, bigfile: false}
18 changes: 18 additions & 0 deletions inventory/has/group_vars/all/dev-sec.yml
@@ -0,0 +1,18 @@
---
# Oracle has problems when root processes are not visible
hidepid_option: 0

os_security_kernel_enable_module_loading: true

sysctl_overwrite:
network_ipv6_enable: false
fs.protected_regular: 0 # needed for opatchauto ...

# ssh settings
ssh_print_last_log: true
ssh_allow_agent_forwarding: false
ssh_permit_tunnel: false
ssh_allow_tcp_forwarding: 'no'
ssh_max_auth_retries: 3

ssh_allow_users: vagrant ansible
28 changes: 28 additions & 0 deletions inventory/has/group_vars/all/host.yml
@@ -0,0 +1,28 @@
---
configure_public_yum_repo: false
configure_motd: false

configure_hugepages_by: memory

# disable hugepages on small systems
# don't forget to enable use_large_pages in oracle parameter
# size_in_gb_hugepages: 2
size_in_gb_hugepages: 0


configure_host_disks: true

host_fs_layout:
- vgname: vgora
state: present
filesystem:
- {mntp: /u01, lvname: orabaselv, lvsize: 40G, fstype: xfs}
- {mntp: swap, lvname: swaplv, lvsize: 16g, fstype: swap}
disk:
- {device: /dev/sdb, pvname: /dev/sdb1}
- vgname: rootvg
state: present
filesystem:
- {mntp: /tmp, lvname: tmplv, lvsize: 1200m, fstype: ext4}
disk:
- {device: /dev/sda, pvname: /dev/sda2}
13 changes: 13 additions & 0 deletions inventory/has/group_vars/all/software_src.yml
@@ -0,0 +1,13 @@
---
is_sw_source_local: true
oracle_sw_copy: false
oracle_sw_unpack: false

# Directory for Installation-Media
oracle_stage_remote: /vagrant

# Example for Remote NFS
# install_from_nfs: true # Mount NFS-Share?
# nfs_server_sw: 192.168.56.99 # NFS-Server
# nfs_server_sw_path: /sw # NFS-Share
# oracle_stage_remote: /u01/se # local mount point for NFS share
8 changes: 8 additions & 0 deletions inventory/has/hosts.yml
@@ -0,0 +1,8 @@
---
all:
children:
has:
hosts:
has-192-168-56-162.nip.io:
ansible_connection: ssh
ansible_ssh_user: vagrant
18 changes: 12 additions & 6 deletions playbooks/single-instance-asm.yml
@@ -1,7 +1,13 @@
---
- import_playbook: dev-sec.yml
- import_playbook: os.yml
- import_playbook: sql-zauberkastern.yml
- import_playbook: swgi.yml
- import_playbook: swdb.yml
- import_playbook: manage-db.yml
- name: Import_playbook dev-sec
import_playbook: dev-sec.yml
- name: Import_playbook os
import_playbook: os.yml
- name: Import_playbook sql-zauberkastern
import_playbook: sql-zauberkastern.yml
- name: Import_playbook swgi
import_playbook: swgi.yml
- name: Import_playbook swdb
import_playbook: swdb.yml
- name: Import_playbook manage-db
import_playbook: manage-db.yml
15 changes: 10 additions & 5 deletions playbooks/single-instance-fs.yml
@@ -1,6 +1,11 @@
---
- import_playbook: dev-sec.yml
- import_playbook: os.yml
- import_playbook: sql-zauberkastern.yml
- import_playbook: swdb.yml
- import_playbook: manage-db.yml
- name: Import_playbook dev-sec
import_playbook: dev-sec.yml
- name: Import_playbook os
import_playbook: os.yml
- name: Import_playbook sql-zauberkastern
import_playbook: sql-zauberkastern.yml
- name: Import_playbook swdb
import_playbook: swdb.yml
- name: Import_playbook manage-db
import_playbook: manage-db.yml