diff --git a/.github/ISSUE_TEMPLATE/dev-testing-request.md b/.github/ISSUE_TEMPLATE/dev-testing-request.md
new file mode 100644
index 0000000000..b6de07f3b0
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/dev-testing-request.md
@@ -0,0 +1,38 @@
+---
+name: DEV testing request
+about: Request the QA team to perform testing related to development and code closure, either manually or automatically.
+labels: status/not-tracked, team/qa, type/dev-testing
+assignees: ''
+
+---
+
+| Target version | Related issue | Related PR |
+|--------------------|--------------------|-----------------|
+| | | |
+
+
+
+## Description
+
+
+## Proposed checks
+
+
+- [ ] Proposed check 1
+- [ ] Proposed check 2
+...
+
+## Steps to reproduce
+
+
+## Expected results
+
+
+## Configuration and considerations
+
diff --git a/.github/ISSUE_TEMPLATE/enhancement-request.md b/.github/ISSUE_TEMPLATE/enhancement-request.md
new file mode 100644
index 0000000000..d17606e145
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/enhancement-request.md
@@ -0,0 +1,18 @@
+---
+name: Enhancement request
+about: 'Request to enhance a repository resource: framework, test, documentation ...'
+labels: status/not-tracked, team/qa, type/enhancement
+assignees: ''
+
+---
+
+
+
+## Description
+
+
+### Current behavior
+
+
+### Expected behavior
+
diff --git a/.github/ISSUE_TEMPLATE/release-testing-request.md b/.github/ISSUE_TEMPLATE/release-testing-request.md
new file mode 100644
index 0000000000..61e10e7c80
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/release-testing-request.md
@@ -0,0 +1,41 @@
+---
+name: Release testing request
+about: Request the QA team to perform pre-release testing, either manually or automatically.
+labels: status/not-tracked, team/qa, type/release-testing
+assignees: ''
+
+---
+
+| Target version | Release candidate | Main release testing issue |
+|--------------------|--------------------|--------------------|
+| | | |
+
+
+
+## Deployment configuration and installation options
+
+
+## Description
+
+
+## Proposed checks
+
+
+- [ ] Proposed check 1
+- [ ] Proposed check 2
+...
+
+## Steps to reproduce
+
+
+## Expected results
+
+
+## Configuration and considerations
+
diff --git a/.github/ISSUE_TEMPLATE/tests-development-request.md b/.github/ISSUE_TEMPLATE/tests-development-request.md
new file mode 100644
index 0000000000..8485620913
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/tests-development-request.md
@@ -0,0 +1,30 @@
+---
+name: Tests development request
+about: Request the QA team to develop and automate new tests.
+labels: status/not-tracked, team/qa, type/tests-development
+assignees: ''
+
+---
+
+| Target version | Related issue | Related PR/dev branch |
+|--------------------|--------------------|-----------------|
+| | | |
+
+
+
+## Description
+
+
+## Configurations
+
+
+## Proposed test cases
+
+
+- [ ] Proposed test case 1
+- [ ] Proposed test case 2
+...
+
+## Considerations
+
diff --git a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md b/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md
deleted file mode 100644
index 89dd02bafa..0000000000
--- a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md
+++ /dev/null
@@ -1,41 +0,0 @@
-|Related issue|
-|---|
-||
-
-
-
-## Description
-
-
-
-## Configuration options
-
-
-
-## Logs example
-
-
-
-## Tests
-
-- [ ] Proven that tests **pass** when they have to pass.
-- [ ] Proven that tests **fail** when they have to fail.
-
-- [ ] Python codebase satisfies PEP-8 style style guide. 
`pycodestyle --max-line-length=120 --show-source --show-pep8 file.py`. -- [ ] Python codebase is documented following the Google Style for Python docstrings. -- [ ] The test is documented in wazuh-qa/docs. -- [ ] `provision_documentation.sh` generate the docs without errors. \ No newline at end of file diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 89dd02bafa..ac999c10d8 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -1,41 +1,37 @@ |Related issue| -|---| -|| +|-------------| +| | - + -## Description + +### Added + +- Item 1 +- Item n - + +### Updated -## Configuration options +- Item 1 +- Item n - + +### Deleted -## Logs example +- Item 1 +- Item n - +--- -## Tests +## Testing performed -- [ ] Proven that tests **pass** when they have to pass. -- [ ] Proven that tests **fail** when they have to fail. - -- [ ] Python codebase satisfies PEP-8 style style guide. `pycodestyle --max-line-length=120 --show-source --show-pep8 file.py`. -- [ ] Python codebase is documented following the Google Style for Python docstrings. -- [ ] The test is documented in wazuh-qa/docs. -- [ ] `provision_documentation.sh` generate the docs without errors. \ No newline at end of file + + + +| Tester | Test path | Jenkins | Local | OS | Commit | Notes | +|--------------------|-----------|---------|--------|-----|--------|----------------------| +| @user (Developer) | | ⚫⚫⚫ | ⚫⚫⚫ | | | Nothing to highlight | +| @user (Reviewer) | | ⚫⚫⚫ | :no_entry_sign: :no_entry_sign: :no_entry_sign: | | | Nothing to highlight | diff --git a/CHANGELOG.md b/CHANGELOG.md index b062adc0a8..05088acfca 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,49 @@ All notable changes to this project will be documented in this file. +## [4.6.0] - Development (unreleased) + +Wazuh commit: TBD \ +Release report: TBD + +### Changed +- Update schema database version([#3879](https://github.com/wazuh/wazuh-qa/pull/3879)) \- (Tests) + +## [4.5.0] - Development (unreleased) + +Wazuh commit: TBD \ +Release report: TBD + +### Added + +- Add tests with new options to avoid FIM synchronization overlapping. ([#3318](https://github.com/wazuh/wazuh-qa/pull/3318)) \- (Framework + tests) +- Add Logcollector millisecond granularity support test case ([#3910](https://github.com/wazuh/wazuh-qa/pull/3910)) \- (Tests) +- Add Windows System folders FIM monitoring tests ([#3720](https://github.com/wazuh/wazuh-qa/pull/3720)) \- (Tests) +- Add 'test_whodata_policy_changes' tests ([#3627](https://github.com/wazuh/wazuh-qa/pull/3627)) \- (Framework + Tests) +- Add test to check if active-response netsh generates alerts when firewall is disabled. ([#3787](https://github.com/wazuh/wazuh-qa/pull/3787)) \- (Framework + Tests) +- Add new tests for logcollector 'ignore' and 'restrict' options ([#3582](https://github.com/wazuh/wazuh-qa/pull/3582)) \- (Tests) +- Add 'Force reconnect' feature to agent_simulator tool. ([#3111](https://github.com/wazuh/wazuh-qa/pull/3111)) \- (Tools) +- Add new module to support migration tool. 
([#3837](https://github.com/wazuh/wazuh-qa/pull/3837)) + +### Changed + +- Update FIM `test_audit` tests to new framework ([#3939](https://github.com/wazuh/wazuh-qa/pull/3939)) \- (Framework + Tests) +- Update FIM test to new FIM DBSync process ([#2728](https://github.com/wazuh/wazuh-qa/pull/2728)) \- (Framework + Tests) +- Update file_limit and registry_limit tests ([#3280](https://github.com/wazuh/wazuh-qa/pull/3280)) \- (Tests) +- Change expected timestamp for proftpd analysisd test predecoder test case ([#3900](https://github.com/wazuh/wazuh-qa/pull/3900)) \- (Tests) +- Skip test_large_changes test module ([#3783](https://github.com/wazuh/wazuh-qa/pull/3783)) \- (Tests) +- Update report_changes tests ([#3405](https://github.com/wazuh/wazuh-qa/pull/3405)) \- (Tests) +- Update Authd force_insert tests ([#3379](https://github.com/wazuh/wazuh-qa/pull/3379)) \- (Tests) +- Update cluster logs in reliability tests ([#2772](https://github.com/wazuh/wazuh-qa/pull/2772)) \- (Tests) +- Use correct version format in agent_simulator tool ([#3198](https://github.com/wazuh/wazuh-qa/pull/3198)) \- (Tools) + +### Fixed + +- Fix imports and add windows support for test_report_changes_and_diff IT ([#3548](https://github.com/wazuh/wazuh-qa/issues/3548)) \- (Framework + Tests) +- Fix a regex error in the FIM integration tests ([#3061](https://github.com/wazuh/wazuh-qa/issues/3061)) \- (Framework + Tests) +- Fix an error in the cluster performance tests related to CSV parser ([#2999](https://github.com/wazuh/wazuh-qa/pull/2999)) \- (Framework + Tests) + + ## [4.4.0] - Development (unreleased) Wazuh commit: TBD \ @@ -9,6 +52,8 @@ Release report: TBD ### Added +- Add new integration test for `authd` to validate error when `authd.pass` is empty ([#3721](https://github.com/wazuh/wazuh-qa/pull/3721)) \- (Framework + Tests) +- Add new test to check missing fields in `cpe_helper.json` file ([#3766](https://github.com/wazuh/wazuh-qa/pull/3766)) \- (Framework + Tests) - Add multigroups tests cases for `test_assign_groups_guess` ([#3979](https://github.com/wazuh/wazuh-qa/pull/3979)) \- (Tests) - Fix test_agent_groups system test ([#3955](https://github.com/wazuh/wazuh-qa/pull/3964)) \- (Tests) - Add new group_hash case and update the `without condition` case output in `wazuh_db/sync_agent_groups_get` ([#3959](https://github.com/wazuh/wazuh-qa/pull/3959)) \- (Tests) @@ -64,6 +109,7 @@ Release report: TBD - Add `monitord.rotate_log` to `local_internal_options` file for `test_macos_format_query` ([#3602](https://github.com/wazuh/wazuh-qa/pull/3602)) \- (Tests) - Adapt analysisd integration tests for EPS ([#3559](https://github.com/wazuh/wazuh-qa/issues/3559)) \- (Tests) - Improve `test_remove_audit` FIM test to retry install and remove command ([#3562](https://github.com/wazuh/wazuh-qa/pull/3562)) \- (Tests) +- Update pattern and expected condition for multi_groups tests ([#3565](https://github.com/wazuh/wazuh-qa/pull/3565)) \- (Tests) - Skip unstable integration tests for gcloud ([#3531](https://github.com/wazuh/wazuh-qa/pull/3531)) \- (Tests) - Skip unstable integration test for agentd ([#3538](https://github.com/wazuh/wazuh-qa/pull/3538)) - Update wazuhdb_getconfig and EPS limit integration tests ([#3146](https://github.com/wazuh/wazuh-qa/pull/3146)) \- (Tests) diff --git a/deps/wazuh_testing/setup.py b/deps/wazuh_testing/setup.py index 4f59d6c320..f998c6f34a 100644 --- a/deps/wazuh_testing/setup.py +++ b/deps/wazuh_testing/setup.py @@ -28,7 +28,9 @@ 'qa_ctl/deployment/vagrantfile_template.txt', 
'qa_ctl/provisioning/wazuh_deployment/templates/preloaded_vars.conf.j2', 'data/qactl_conf_validator_schema.json', - 'data/all_disabled_ossec.conf' + 'data/all_disabled_ossec.conf', + 'tools/migration_tool/delta_schema.json', + 'tools/migration_tool/CVE_JSON_5.0_bundled.json' ] scripts_list = [ @@ -60,7 +62,7 @@ def get_files_from_directory(directory): setup( name='wazuh_testing', - version='4.4.0', + version='4.6.0', description='Wazuh testing utilities to help programmers automate tests', url='https://github.com/wazuh', author='Wazuh', diff --git a/deps/wazuh_testing/wazuh_testing/__init__.py b/deps/wazuh_testing/wazuh_testing/__init__.py index acb4b49b6d..24d541af85 100644 --- a/deps/wazuh_testing/wazuh_testing/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/__init__.py @@ -24,7 +24,6 @@ WAZUH_CONF_PATH = os.path.join(WAZUH_PATH, 'etc', 'ossec.conf') WAZUH_LOGS_PATH = os.path.join(WAZUH_PATH, 'logs') CLIENT_KEYS_PATH = os.path.join(WAZUH_PATH, 'etc' if platform.system() == 'Linux' else '', 'client.keys') -DB_PATH = os.path.join(WAZUH_PATH, 'queue', 'db') QUEUE_DB_PATH = os.path.join(WAZUH_PATH, 'queue', 'db') QUEUE_SOCKETS_PATH = os.path.join(WAZUH_PATH, 'queue', 'sockets') WAZUH_DB_SOCKET_PATH = os.path.join(QUEUE_DB_PATH, 'wdb') @@ -39,6 +38,9 @@ API_JSON_LOG_FILE_PATH = os.path.join(WAZUH_PATH, 'logs', 'api.json') API_LOG_FOLDER = os.path.join(WAZUH_PATH, 'logs', 'api') WAZUH_TESTING_PATH = os.path.dirname(os.path.abspath(__file__)) +WAZUH_TESTING_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +DEFAULT_AUTHD_PASS_PATH = os.path.join(WAZUH_PATH, 'etc', 'authd.pass') + # Daemons LOGCOLLECTOR_DAEMON = 'wazuh-logcollector' @@ -71,6 +73,28 @@ T_60 = 60 +# Local internal options +WINDOWS_DEBUG = 'windows.debug' +SYSCHECK_DEBUG = 'syscheck.debug' +VERBOSE_DEBUG_OUTPUT = 2 + +# Wazuh Service commands +WAZUH_SERVICES_STOP = 'stop' +WAZUH_SERVICES_START = 'start' + + +# Configurations +DATA = 'data' +WAZUH_LOG_MONITOR = 'wazuh_log_monitor' + + +# File Types +FIFO = 'fifo' +SYMLINK = 'sym_link' +HARDLINK = 'hard_link' +SOCKET = 'socket' +REGULAR = 'regular' + # Protocols UDP = 'UDP' TCP = 'TCP' diff --git a/deps/wazuh_testing/wazuh_testing/data/syscheck_event_windows.json b/deps/wazuh_testing/wazuh_testing/data/syscheck_event_windows.json index 331071bd10..32ebe1ed60 100644 --- a/deps/wazuh_testing/wazuh_testing/data/syscheck_event_windows.json +++ b/deps/wazuh_testing/wazuh_testing/data/syscheck_event_windows.json @@ -30,12 +30,18 @@ ], "if": { "properties": { - "mode": {"const": "whodata"} + "mode": { + "const": "whodata" + } }, - "required": ["mode"] + "required": [ + "mode" + ] }, "then": { - "required": ["audit"] + "required": [ + "audit" + ] }, "properties": { "path": { @@ -52,7 +58,11 @@ "mode": { "$id": "#/properties/data/properties/mode", "type": "string", - "enum": ["realtime", "whodata", "scheduled"], + "enum": [ + "realtime", + "whodata", + "scheduled" + ], "examples": [ "whodata" ], @@ -61,7 +71,11 @@ "type": { "$id": "#/properties/data/properties/type", "type": "string", - "enum": ["added", "modified", "deleted"], + "enum": [ + "added", + "modified", + "deleted" + ], "examples": [ "added" ], @@ -70,7 +84,10 @@ "arch": { "$id": "#/properties/data/properties/arch", "type": "string", - "enum": ["[x32]", "[x64]"], + "enum": [ + "[x32]", + "[x64]" + ], "examples": [ "[x32]", "[x64]" @@ -121,7 +138,11 @@ "type": { "$id": "#/properties/data/properties/attributes/properties/type", "type": "string", - "enum": ["file", "registry_key", "registry_value"], + 
"enum": [ + "file", + "registry_key", + "registry_value" + ], "examples": [ "file" ], @@ -130,14 +151,24 @@ "value_type": { "$id": "#/properties/data/properties/attributes/properties/type", "type": "string", - "enum": ["REG_NONE", "REG_SZ", "REG_EXPAND_SZ", "REG_BINARY", "REG_DWORD", "REG_DWORD_BIG_ENDIAN", - "REG_LINK", "REG_MULTI_SZ", "REG_RESOURCE_LIST", "REG_FULL_RESOURCE_DESCRIPTOR", - "REG_RESOURCE_REQUIREMENTS_LIST", "REG_QWORD"], + "enum": [ + "REG_NONE", + "REG_SZ", + "REG_EXPAND_SZ", + "REG_BINARY", + "REG_DWORD", + "REG_DWORD_BIG_ENDIAN", + "REG_LINK", + "REG_MULTI_SZ", + "REG_RESOURCE_LIST", + "REG_FULL_RESOURCE_DESCRIPTOR", + "REG_RESOURCE_REQUIREMENTS_LIST", + "REG_QWORD" + ], "examples": [ "REG_SZ" ], "pattern": "^(.*)$" - }, "size": { "$id": "#/properties/data/properties/attributes/properties/size", @@ -151,69 +182,69 @@ "$id": "#/properties/data/properties/attributes/properties/perm", "type": "object", "patternProperties": { - "^S-[-0-9]*": { - "type": "object", - "properties": { - "name": { - "$id": "#/properties/data/properties/attributes/properties/perm/name", - "type": "string" - }, - "allowed": { - "$id": "#/properties/data/properties/attributes/properties/perm/allowed", - "type": "array", - "items": { - "$id": "#/properties/data/properties/attributes/properties/perm/allowed/items", - "type": "string", - "enum": [ - "generic_read", - "generic_write", - "generic_execute", - "generic_all", - "delete", - "read_control", - "write_dac", - "write_owner", - "synchronize", - "read_data", - "write_data", - "append_data", - "read_ea", - "write_ea", - "execute", - "read_attributes", - "write_attributes" - ] - } - }, - "denied": { - "$id": "#/properties/data/properties/attributes/properties/perm/denied", - "type": "array", - "items": { - "$id": "#/properties/data/properties/attributes/properties/perm/denied/items", - "type": "string", - "enum": [ - "generic_read", - "generic_write", - "generic_execute", - "generic_all", - "delete", - "read_control", - "write_dac", - "write_owner", - "synchronize", - "read_data", - "write_data", - "append_data", - "read_ea", - "write_ea", - "execute", - "read_attributes", - "write_attributes" - ] - } - } + "^S-[-0-9]*": { + "type": "object", + "properties": { + "name": { + "$id": "#/properties/data/properties/attributes/properties/perm/name", + "type": "string" + }, + "allowed": { + "$id": "#/properties/data/properties/attributes/properties/perm/allowed", + "type": "array", + "items": { + "$id": "#/properties/data/properties/attributes/properties/perm/allowed/items", + "type": "string", + "enum": [ + "generic_read", + "generic_write", + "generic_execute", + "generic_all", + "delete", + "read_control", + "write_dac", + "write_owner", + "synchronize", + "read_data", + "write_data", + "append_data", + "read_ea", + "write_ea", + "execute", + "read_attributes", + "write_attributes" + ] + } + }, + "denied": { + "$id": "#/properties/data/properties/attributes/properties/perm/denied", + "type": "array", + "items": { + "$id": "#/properties/data/properties/attributes/properties/perm/denied/items", + "type": "string", + "enum": [ + "generic_read", + "generic_write", + "generic_execute", + "generic_all", + "delete", + "read_control", + "write_dac", + "write_owner", + "synchronize", + "read_data", + "write_data", + "append_data", + "read_ea", + "write_ea", + "execute", + "read_attributes", + "write_attributes" + ] } + } } + } } }, "uid": { @@ -336,7 +367,7 @@ "group_name", "inode", "attributes" - ], + ], "examples": [ "size", "mtime", @@ -356,60 +387,174 @@ } 
}, "old_attributes": { - "required": ["type" - ], - "allOf": [ - {"$ref": "#/properties/data/properties/attributes"} - ] - }, - "audit": { - "$id": "#/properties/data/properties/audit", + "$id": "#/properties/data/properties/attributes", "type": "object", "required": [ - "process_id", - "process_name", - "user_id", - "user_name" + "type" ], "properties": { - "process_id": { - "$id": "#/properties/data/properties/audit/properties/process_id", - "type": "integer", - "default": 0, - "examples": [ - 1899 - ] - }, - "process_name": { - "$id": "#/properties/data/properties/audit/properties/process_name", + "type": { + "$id": "#/properties/data/properties/attributes/properties/type", "type": "string", - "default": "", + "enum": [ + "file", + "registry_key", + "registry_value" + ], "examples": [ - "/usr/bin/touch" + "file" ], "pattern": "^(.*)$" }, - "user_id": { - "$id": "#/properties/data/properties/audit/properties/user_id", + "value_type": { + "$id": "#/properties/data/properties/attributes/properties/type", "type": "string", - "default": "", + "enum": [ + "REG_NONE", + "REG_SZ", + "REG_EXPAND_SZ", + "REG_BINARY", + "REG_DWORD", + "REG_DWORD_BIG_ENDIAN", + "REG_LINK", + "REG_MULTI_SZ", + "REG_RESOURCE_LIST", + "REG_FULL_RESOURCE_DESCRIPTOR", + "REG_RESOURCE_REQUIREMENTS_LIST", + "REG_QWORD" + ], "examples": [ - "0" + "REG_SZ" ], - "pattern": "^([0-9a-fA-F]|S-1-.*)+$" + "pattern": "^(.*)$" }, - "user_name": { - "$id": "#/properties/data/properties/audit/properties/user_name", - "type": "string", - "default": "", + "size": { + "$id": "#/properties/data/properties/attributes/properties/size", + "type": "integer", + "default": 0, "examples": [ - "root" + 0 + ] + }, + "perm": { + "$id": "#/properties/data/properties/attributes/properties/perm", + "type": "object", + "patternProperties": { + "^S-[-0-9]*": { + "type": "object", + "properties": { + "name": { + "$id": "#/properties/data/properties/attributes/properties/perm/name", + "type": "string" + }, + "allowed": { + "$id": "#/properties/data/properties/attributes/properties/perm/allowed", + "type": "array", + "items": { + "$id": "#/properties/data/properties/attributes/properties/perm/allowed/items", + "type": "string", + "enum": [ + "generic_read", + "generic_write", + "generic_execute", + "generic_all", + "delete", + "read_control", + "write_dac", + "write_owner", + "synchronize", + "read_data", + "write_data", + "append_data", + "read_ea", + "write_ea", + "execute", + "read_attributes", + "write_attributes" + ] + } + }, + "denied": { + "$id": "#/properties/data/properties/attributes/properties/perm/denied", + "type": "array", + "items": { + "$id": "#/properties/data/properties/attributes/properties/perm/denied/items", + "type": "string", + "enum": [ + "generic_read", + "generic_write", + "generic_execute", + "generic_all", + "delete", + "read_control", + "write_dac", + "write_owner", + "synchronize", + "read_data", + "write_data", + "append_data", + "read_ea", + "write_ea", + "execute", + "read_attributes", + "write_attributes" + ] + } + } + } + } + } + }, + "audit": { + "$id": "#/properties/data/properties/audit", + "type": "object", + "required": [ + "process_id", + "process_name", + "user_id", + "user_name" ], - "pattern": "^(.*)$" + "properties": { + "process_id": { + "$id": "#/properties/data/properties/audit/properties/process_id", + "type": "integer", + "default": 0, + "examples": [ + 1899 + ] + }, + "process_name": { + "$id": "#/properties/data/properties/audit/properties/process_name", + "type": "string", + "default": "", + "examples": 
[ + "/usr/bin/touch" + ], + "pattern": "^(.*)$" + }, + "user_id": { + "$id": "#/properties/data/properties/audit/properties/user_id", + "type": "string", + "default": "", + "examples": [ + "0" + ], + "pattern": "^([0-9a-fA-F]|S-1-.*)+$" + }, + "user_name": { + "$id": "#/properties/data/properties/audit/properties/user_name", + "type": "string", + "default": "", + "examples": [ + "root" + ], + "pattern": "^(.*)$" + } + } } } } } } } -} +} \ No newline at end of file diff --git a/deps/wazuh_testing/wazuh_testing/fim.py b/deps/wazuh_testing/wazuh_testing/fim.py index 0ff0924825..1b814fbfa2 100644 --- a/deps/wazuh_testing/wazuh_testing/fim.py +++ b/deps/wazuh_testing/wazuh_testing/fim.py @@ -1093,20 +1093,6 @@ def callback_detect_integrity_event(line): return None -def callback_detect_registry_integrity_state_event(line): - event = callback_detect_integrity_event(line) - if event and event['component'] == 'fim_registry' and event['type'] == 'state': - return event['data'] - return None - - -def callback_detect_registry_integrity_clear_event(line): - event = callback_detect_integrity_event(line) - if event and event['component'] == 'fim_registry' and event['type'] == 'integrity_clear': - return True - return None - - def callback_detect_integrity_state(line): event = callback_detect_integrity_event(line) if event: @@ -1115,8 +1101,24 @@ def callback_detect_integrity_state(line): return None +def callback_start_synchronization(line): + """ Callback that detects if a line contains the FIM sync module has started. + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + if 'FIM sync module started' in line: + return line + return None + + def callback_detect_synchronization(line): - if 'Initializing FIM Integrity Synchronization check' in line: + """ Callback that detects if a line contains a FIM sync has started. + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + if 'Executing FIM sync' in line: return line return None @@ -1148,13 +1150,6 @@ def callback_audit_health_check(line): return None -def callback_audit_cannot_start(line): - match = re.match(r'.*Who-data engine could not start. Switching who-data to real-time.', line) - if match: - return True - return None - - def callback_audit_added_rule(line): match = re.match(r'.*Added audit rule for monitoring directory: \'(.+)\'', line) if match: @@ -2506,6 +2501,16 @@ def detect_initial_scan_start(file_monitor): error_message='Did not receive expected "File integrity monitoring scan started" event') +def detect_sync_initial_scan_start(file_monitor): + """Detect initial sync scan start. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + """ + file_monitor.start(timeout=60, callback=callback_start_synchronization, + error_message='Did not receive expected "FIM sync scan started" event') + + def detect_realtime_start(file_monitor): """Detect realtime engine start when restarting Wazuh. diff --git a/deps/wazuh_testing/wazuh_testing/fim_module/__init__.py b/deps/wazuh_testing/wazuh_testing/fim_module/__init__.py deleted file mode 100644 index f1bb1d3658..0000000000 --- a/deps/wazuh_testing/wazuh_testing/fim_module/__init__.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (C) 2015-2022, Wazuh Inc. -# Created by Wazuh, Inc. . 
-# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -''' -The purpose of this file is to contain all the variables necessary for FIM in order to be easier to -maintain if one of them changes in the future. -''' - -# Variables -SIZE_LIMIT_CONFIGURED_VALUE = 10240 -# Key variables -WINDOWS_HKEY_LOCAL_MACHINE = 'HKEY_LOCAL_MACHINE' -MONITORED_KEY = 'SOFTWARE\\random_key' -MONITORED_KEY_2 = "SOFTWARE\\Classes\\random_key_2" -WINDOWS_REGISTRY = 'WINDOWS_REGISTRY' - - -# Value key -SYNC_INTERVAL = 'SYNC_INTERVAL' -SYNC_INTERVAL_VALUE = MAX_EVENTS_VALUE = 20 - -# Folders variables -TEST_DIR_1 = 'testdir1' -TEST_DIRECTORIES = 'TEST_DIRECTORIES' -TEST_REGISTRIES = 'TEST_REGISTRIES' - -# FIM modules -SCHEDULE_MODE = 'scheduled' - -# Yaml Configuration -YAML_CONF_REGISTRY_RESPONSE = 'wazuh_conf_registry_responses_win32.yaml' -YAML_CONF_SYNC_WIN32 = 'wazuh_sync_conf_win32.yaml' - -# Synchronization options -SYNCHRONIZATION_ENABLED = 'SYNCHRONIZATION_ENABLED' -SYNCHRONIZATION_REGISTRY_ENABLED = 'SYNCHRONIZATION_REGISTRY_ENABLED' - -# Callbacks message -INTEGRITY_CONTROL_MESSAGE = r'.*Sending integrity control message: (.+)$' -REGISTRY_DBSYNC_NO_DATA = r'.*#!-fim_registry dbsync no_data (.+)' -CB_FILE_LIMIT_CAPACITY = r".*Sending DB (\d+)% full alert." -CB_FILE_LIMIT_BACK_TO_NORMAL = r".*(Sending DB back to normal alert)." -CB_COUNT_REGISTRY_FIM_ENTRIES = r".*Fim registry entries: (\d+)" -CB_DATABASE_FULL_COULD_NOT_INSERT = r".*Couldn't insert ('.*')? entry into DB\. The DB is full.*" -CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE = r".*Couldn't insert ('.*')? value entry into DB\. The DB is full.*" -CB_FILE_LIMIT_VALUE = r".*Maximum number of entries to be monitored: '(\d+)'" -CB_FILE_SIZE_LIMIT_BIGGER_THAN_DISK_QUOTA = r".*Setting 'disk_quota' to (\d+), 'disk_quota' must be greater than 'file_size'" -CB_FILE_LIMIT_DISABLED = r".*(No limit set) to maximum number of entries to be monitored" -CB_INODE_ENTRIES_PATH_COUNT = r".*Fim inode entries: (\d+), path count: (\d+)" -CB_FIM_ENTRIES_COUNT =r".*Fim entries: (\d+)" -CB_DETECT_FIM_EVENT = r'.*Sending FIM event: (.+)$' - -#Error Messages -ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT = 'Did not receive expected "DEBUG: ...: Sending DB ...% full alert." event' -ERR_MSG_FIM_INODE_ENTRIES = 'Did not receive expected "Fim inode entries: ..., path count: ..." event' -ERR_MSG_DB_BACK_TO_NORMAL = 'Did not receive expected "DEBUG: ...: Sending DB back to normal alert." event' -ERR_MSG_DATABASE_FULL_ALERT_EVENT = 'Did not receive expected "DEBUG: ...: Sending DB 100% full alert." event' -ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT = 'Did not receive expected "DEBUG: ...: Couldn\'t insert \'...\' entry into DB. The DB is full, ..." event' -ERR_MSG_FILE_LIMIT_VALUES = 'Did not receive expected "DEBUG: ...: Maximum number of entries to be monitored: ..." event' -ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL = 'Wrong value for full database alert.' -ERR_MSG_DISK_QUOTA_MUST_BE_GREATER = "Did not receive expected 'DEBUG: ... disk_quota must be greater than file_size message'" -ERR_MSG_CONTENT_CHANGES_EMPTY = "content_changes is empty" -ERR_MSG_CONTENT_CHANGES_NOT_EMPTY = "content_changes isn't empty" -ERR_MSG_FILE_LIMIT_DISABLED = 'Did not receive expected "DEBUG: ...: No limit set to maximum number of entries to be monitored" event' -ERR_MSG_NO_EVENTS_EXPECTED = 'No events should be detected.' -ERR_MSG_DELETED_EVENT_NOT_RECIEVED = 'Did not receive expected deleted event' -ERR_MSG_WRONG_NUMBER_OF_ENTRIES = 'Wrong number of entries counted.' 
-ERR_MSG_WRONG_INODE_PATH_COUNT = 'Wrong number of inodes and path count' -ERR_MSG_WRONG_FILE_LIMIT_VALUE ='Wrong value for file_limit.' -ERR_MSG_WRONG_DISK_QUOTA_VALUE ='Wrong value for disk_quota' -ERR_MSG_WRONG_CAPACITY_LOG_DB_LIMIT= 'Wrong capacity log for DB file_limit' diff --git a/deps/wazuh_testing/wazuh_testing/fim_module/event_monitor.py b/deps/wazuh_testing/wazuh_testing/fim_module/event_monitor.py deleted file mode 100644 index bb18dc7573..0000000000 --- a/deps/wazuh_testing/wazuh_testing/fim_module/event_monitor.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (C) 2015-2022, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - - -import re -import json -from sys import platform -from wazuh_testing import logger -from wazuh_testing.fim_module import (CB_INODE_ENTRIES_PATH_COUNT, CB_FIM_ENTRIES_COUNT, CB_DETECT_FIM_EVENT) - - -def callback_detect_event(line): - msg = CB_DETECT_FIM_EVENT - match = re.match(msg, line) - if not match: - return None - - try: - json_event = json.loads(match.group(1)) - if json_event['type'] == 'event': - return json_event - except (json.JSONDecodeError, AttributeError, KeyError) as e: - logger.warning(f"Couldn't load a log line into json object. Reason {e}") - - -def callback_entries_path_count(line): - if platform != 'win32': - match = re.match(CB_INODE_ENTRIES_PATH_COUNT, line) - else: - match = re.match(CB_FIM_ENTRIES_COUNT, line) - - if match: - if platform != 'win32': - return match.group(1), match.group(2) - else: - return match.group(1), None diff --git a/deps/wazuh_testing/wazuh_testing/fim_module/fim_synchronization.py b/deps/wazuh_testing/wazuh_testing/fim_module/fim_synchronization.py deleted file mode 100644 index 889d2b8c8a..0000000000 --- a/deps/wazuh_testing/wazuh_testing/fim_module/fim_synchronization.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (C) 2015-2021, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -from wazuh_testing.fim import LOG_FILE_PATH, callback_detect_registry_integrity_state_event -from wazuh_testing import global_parameters -from wazuh_testing.fim_module.fim_variables import MAX_EVENTS_VALUE, CB_REGISTRY_DBSYNC_NO_DATA -from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback - - -def get_sync_msgs(tout, new_data=True): - """Look for as many synchronization events as possible. - - This function will look for the synchronization messages until a Timeout is raised or 'max_events' is reached. - - Args: - tout (int): Timeout that will be used to get the dbsync_no_data message. - new_data (bool): Specifies if the test will wait the event `dbsync_no_data`. - - Returns: - A list with all the events in json format. 
- """ - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) - events = [] - if new_data: - wazuh_log_monitor.start(timeout=tout, - callback=generate_monitoring_callback(CB_REGISTRY_DBSYNC_NO_DATA), - error_message='Did not receive expected ' - '"db sync no data" event') - for _ in range(0, MAX_EVENTS_VALUE): - try: - sync_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=callback_detect_registry_integrity_state_event, - accum_results=1, - error_message='Did not receive expected ' - 'Sending integrity control message"').result() - except TimeoutError: - break - - events.append(sync_event) - - return events - - -def find_value_in_event_list(key_path, value_name, event_list): - """Function that looks for a key path and value_name in a list of json events. - - Args: - path (str): Path of the registry key. - value_name (str): Name of the value. - event_list (list): List containing the events in JSON format. - - Returns: - The event that matches the specified path. None if no event was found. - """ - for event in event_list: - if 'value_name' not in event.keys(): - continue - - if event['path'] == key_path and event['value_name'] == value_name: - return event - - return None diff --git a/deps/wazuh_testing/wazuh_testing/fim_module/fim_variables.py b/deps/wazuh_testing/wazuh_testing/fim_module/fim_variables.py deleted file mode 100644 index d2b2f7ba3c..0000000000 --- a/deps/wazuh_testing/wazuh_testing/fim_module/fim_variables.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (C) 2015-2021, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -''' -The purpose of this file is to contain all the variables necessary for FIM in order to be easier to -maintain if one of them changes in the future. - -UPDATE: This file is deprecated. Add new variables to de fim_module/__init__.py file. If this is used -in a test, refactor the imports to adhere to the new standard. -''' - -# Variables -SIZE_LIMIT_CONFIGURED_VALUE = 10 * 1024 - -# Key Variables -WINDOWS_HKEY_LOCAL_MACHINE = 'HKEY_LOCAL_MACHINE' -MONITORED_KEY = 'SOFTWARE\\random_key' -MONITORED_KEY_2 = "SOFTWARE\\Classes\\random_key_2" -WINDOWS_REGISTRY = 'WINDOWS_REGISTRY' - - -# Value Key -SYNC_INTERVAL = 'SYNC_INTERVAL' -SYNC_INTERVAL_VALUE = MAX_EVENTS_VALUE = 20 - - -# Folder Variables -TEST_DIR_1 = 'testdir1' -TEST_DIRECTORIES = 'TEST_DIRECTORIES' -TEST_REGISTRIES = 'TEST_REGISTRIES' - - -# Syscheck Attributes -REPORT_CHANGES = 'report_changes' -DIFF_SIZE_LIMIT = 'diff_size_limit' -FILE_SIZE_ENABLED = 'FILE_SIZE_ENABLED' -FILE_SIZE_LIMIT = 'FILE_SIZE_LIMIT' -DISK_QUOTA_ENABLED = 'DISK_QUOTA_ENABLED' -DISK_QUOTA_LIMIT = 'DISK_QUOTA_LIMIT' - -# Syscheck Values -DIFF_LIMIT_VALUE = 2 -DIFF_DEFAULT_LIMIT_VALUE = 51200 - - -# FIM Modes -SCHEDULE_MODE = 'scheduled' - -# Yaml Configuration -YAML_CONF_REGISTRY_RESPONSE = 'wazuh_conf_registry_responses_win32.yaml' -YAML_CONF_SYNC_WIN32 = 'wazuh_sync_conf_win32.yaml' -YAML_CONF_DIFF = 'wazuh_conf_diff.yaml' - -# Synchronization Options -SYNCHRONIZATION_ENABLED = 'SYNCHRONIZATION_ENABLED' -SYNCHRONIZATION_REGISTRY_ENABLED = 'SYNCHRONIZATION_REGISTRY_ENABLED' - -# Callback Messages -CB_INTEGRITY_CONTROL_MESSAGE = r'.*Sending integrity control message: (.+)$' -CB_REGISTRY_DBSYNC_NO_DATA = r'.*#!-fim_registry dbsync no_data (.+)' -CB_FILE_LIMIT_CAPACITY = r".*Sending DB (\d+)% full alert." -CB_FILE_LIMIT_BACK_TO_NORMAL = r".*(Sending DB back to normal alert)." 
-CB_COUNT_REGISTRY_FIM_ENTRIES = r".*Fim registry entries: (\d+)" -CB_DATABASE_FULL_COULD_NOT_INSERT = r".*Couldn't insert '.*' (value )?entry into DB\. The DB is full.*" -CB_FILE_LIMIT_VALUE = r".*Maximum number of entries to be monitored: '(\d+)'" -CB_FILE_SIZE_LIMIT_BIGGER_THAN_DISK_QUOTA = r".*Setting 'disk_quota' to (\d+), 'disk_quota' must be greater than 'file_size'" -CB_MAXIMUM_FILE_SIZE = r'.*Maximum file size limit to generate diff information configured to \'(\d+) KB\'.*' -CB_FILE_LIMIT_CAPACITY = r".*Sending DB (\d+)% full alert." -CB_FILE_LIMIT_BACK_TO_NORMAL = r".*(Sending DB back to normal alert)." -CB_COUNT_REGISTRY_FIM_ENTRIES = r".*Fim registry entries: (\d+)" -CB_DATABASE_FULL_COULD_NOT_INSERT = r".*Couldn't insert '.*' (value )?entry into DB\. The DB is full.*" -CB_FILE_LIMIT_VALUE = r".*Maximum number of entries to be monitored: '(\d+)'" -CB_FILE_SIZE_LIMIT_BIGGER_THAN_DISK_QUOTA = r".*Setting 'disk_quota' to (\d+), 'disk_quota' must be greater than 'file_size'" -CB_MAXIMUM_FILE_SIZE = r'.*Maximum file size limit to generate diff information configured to \'(\d+) KB\'.*' - - -#Error Messages -ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT = 'Did not receive expected "DEBUG: ...: Sending DB ...% full alert." event' -ERR_MSG_FIM_INODE_ENTRIES = 'Did not receive expected "Fim inode entries: ..., path count: ..." event' -ERR_MSG_DB_BACK_TO_NORMAL = 'Did not receive expected "DEBUG: ...: Sending DB back to normal alert." event' -ERR_MSG_WRONG_NUMBER_OF_ENTRIES = 'Wrong number of entries counted.' -ERR_MSG_WRONG_FILE_LIMIT_VALUE ='Wrong value for file_limit.' -ERR_MSG_WRONG_DISK_QUOTA_VALUE ='Wrong value for disk_quota' -ERR_MSG_DATABASE_FULL_ALERT_EVENT = 'Did not receive expected "DEBUG: ...: Sending DB 100% full alert." event' -ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT = 'Did not receive expected "DEBUG: ...: Couldn\'t insert \'...\' entry into DB. The DB is full, ..." event' -ERR_MSG_FILE_LIMIT_VALUES = 'Did not receive expected "DEBUG: ...: Maximum number of entries to be monitored: ..." event' -ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL = 'Wrong value for full database alert.' -ERR_MSG_DISK_QUOTA_MUST_BE_GREATER = "Did not receive expected 'DEBUG: ... disk_quota must be greater than file_size message'" -ERR_MSG_CONTENT_CHANGES_EMPTY = "content_changes is empty" -ERR_MSG_CONTENT_CHANGES_NOT_EMPTY = "content_changes isn't empty" -ERR_MSG_MAXIMUM_FILE_SIZE = 'Did not receive expected "Maximum file size limit configured to \'... KB\'..." 
event' -ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE = 'Wrong value for diff_size_limit' diff --git a/deps/wazuh_testing/wazuh_testing/mocking/__init__.py b/deps/wazuh_testing/wazuh_testing/mocking/__init__.py index 284cd5c814..44e799bbc9 100644 --- a/deps/wazuh_testing/wazuh_testing/mocking/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/mocking/__init__.py @@ -68,6 +68,14 @@ 'os_arch': 'x86_64', 'config_sum': '', 'merged_sum': '', 'manager_host': 'alas2', 'node_name': 'node01', 'date_add': '1645538646', 'last_keepalive': '253402300799', 'sync_status': 'synced', 'connection_status': 'active'}, + 'ALAS_2022': {'hostname': 'alas2022', 'architecture': 'x86_64', 'os_name': 'Amazon Linux', 'os_version': '2022', + 'os_codename': '', 'os_major': '2022', 'os_minor': '', 'os_patch': '', 'os_build': '', + 'os_platform': 'amzn', 'sysname': 'Linux', 'release': '5.15.29-16.111.amzn2022.x86_64', + 'version': 'Wazuh v4.4.0', 'os_release': '', 'checksum': '1645538649327530789', 'name': 'alas2022', + 'ip': '127.0.0.1', 'register_ip': '127.0.0.1', 'internal_key': '', 'os_arch': 'x86_64', + 'config_sum': '', 'merged_sum': '', 'manager_host': 'alas2022', 'node_name': 'node01', + 'date_add': '1645538646', 'last_keepalive': '253402300799', 'sync_status': 'synced', + 'connection_status': 'active'}, 'RHEL8': {'os_name': 'CentOS Linux', 'os_major': '8', 'os_minor': '1', 'os_platform': 'centos', 'name': 'centos8', 'connection_status': 'active'}, 'RHEL7': {'os_name': 'CentOS Linux', 'os_major': '7', 'os_minor': '1', 'os_platform': 'centos', 'os_version': '7.0', @@ -239,7 +247,7 @@ def delete_mocked_agent(agent_id): global_db.delete_agent(agent_id) # Remove agent id DB file if exists - remove_file(os.path.join(wazuh_testing.DB_PATH, f"{agent_id}.db")) + remove_file(os.path.join(wazuh_testing.QUEUE_DB_PATH, f"{agent_id}.db")) # Remove entry from client keys client_keys.delete_client_keys_entry(agent_id) diff --git a/deps/wazuh_testing/wazuh_testing/modules/__init__.py b/deps/wazuh_testing/wazuh_testing/modules/__init__.py index b6e9a50d28..d9a289651c 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/modules/__init__.py @@ -1,33 +1,37 @@ -# Copyright (C) 2015-2022, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -''' -The purpose of this file is to contain all the variables necessary for Wazuh in order to be easier -to maintain if one of them changes in the future. -''' -import pytest - -# Services Variables -WAZUH_SERVICES_STOPPED = 'stopped' -WAZUH_SERVICE_PREFIX = 'wazuh' -WAZUH_SERVICES_STOP = 'stop' -WAZUH_SERVICES_START = 'start' - -# Configurations -DATA = 'data' -WAZUH_LOG_MONITOR = 'wazuh_log_monitor' - -# Marks Executions - -TIER0 = pytest.mark.tier(level=0) -TIER1 = pytest.mark.tier(level=1) -TIER2 = pytest.mark.tier(level=2) - -WINDOWS = pytest.mark.win32 -LINUX = pytest.mark.linux -MACOS = pytest.mark.darwin -SOLARIS = pytest.mark.sunos5 - -AGENT = pytest.mark.agent -SERVER = pytest.mark.server +# Copyright (C) 2015-2023, Wazuh Inc. +# Created by Wazuh, Inc. . +# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +''' +The purpose of this file is to contain all the variables necessary for Wazuh in order to be easier +to maintain if one of them changes in the future. 
+'''
+import pytest
+
+WAZUH_SERVICE_PREFIX = 'wazuh'
+WAZUH_SERVICES_STOPPED = 'stopped'
+WAZUH_SERVICES_STOP = 'stop'
+WAZUH_SERVICES_START = 'start'
+
+# Configurations
+DATA = 'data'
+WAZUH_LOG_MONITOR = 'wazuh_log_monitor'
+
+# Marks Executions
+
+TIER0 = pytest.mark.tier(level=0)
+TIER1 = pytest.mark.tier(level=1)
+TIER2 = pytest.mark.tier(level=2)
+
+WINDOWS = pytest.mark.win32
+LINUX = pytest.mark.linux
+MACOS = pytest.mark.darwin
+SOLARIS = pytest.mark.sunos5
+
+AGENT = pytest.mark.agent
+SERVER = pytest.mark.server
+
+# Local internal options
+WINDOWS_DEBUG = 'windows.debug'
+SYSCHECK_DEBUG = 'syscheck.debug'
+VERBOSE_DEBUG_OUTPUT = 2
diff --git a/deps/wazuh_testing/wazuh_testing/modules/authd/__init__.py b/deps/wazuh_testing/wazuh_testing/modules/authd/__init__.py
new file mode 100644
index 0000000000..898034a235
--- /dev/null
+++ b/deps/wazuh_testing/wazuh_testing/modules/authd/__init__.py
@@ -0,0 +1,8 @@
+'''
+copyright: Copyright (C) 2015-2023, Wazuh Inc.
+           Created by Wazuh, Inc. .
+           This program is free software; you can redistribute it and/or modify it under the terms of GPLv2
+'''
+
+# Variables
+AUTHD_PREFIX = r'.*wazuh-authd.*'
diff --git a/deps/wazuh_testing/wazuh_testing/modules/authd/event_monitor.py b/deps/wazuh_testing/wazuh_testing/modules/authd/event_monitor.py
new file mode 100644
index 0000000000..7a116b9053
--- /dev/null
+++ b/deps/wazuh_testing/wazuh_testing/modules/authd/event_monitor.py
@@ -0,0 +1,51 @@
+'''
+copyright: Copyright (C) 2015-2023, Wazuh Inc.
+           Created by Wazuh, Inc. .
+           This program is free software; you can redistribute it and/or modify it under the terms of GPLv2
+'''
+import re
+
+from wazuh_testing import T_30
+from wazuh_testing.modules.authd import AUTHD_PREFIX
+from wazuh_testing.tools import LOG_FILE_PATH
+from wazuh_testing.tools.monitoring import FileMonitor
+
+
+def make_authd_callback(pattern, prefix=AUTHD_PREFIX):
+    """Create a callback function from a text pattern.
+
+    It already contains the authd prefix.
+
+    Args:
+        pattern (str): String to match on the log.
+        prefix (str): regular expression used as prefix before the pattern.
+
+    Returns:
+        lambda: function that returns True if the line matches, False otherwise.
+
+    Examples:
+        >>> callback_empty_pass_error = make_authd_callback("ERROR: Empty password provided.")
+    """
+    pattern = r'\s+'.join(pattern.split())
+    regex = re.compile(r'{}{}'.format(prefix, pattern))
+
+    return lambda line: regex.match(line) is not None
+
+
+def check_authd_event(file_monitor=None, callback='', error_message=None, update_position=True,
+                      prefix=AUTHD_PREFIX, timeout=T_30, accum_results=1, file_to_monitor=LOG_FILE_PATH):
+    """Check if an authd event occurs.
+    Args:
+        file_monitor (FileMonitor): FileMonitor object to monitor the file content.
+        callback (str): log regex to check in the Wazuh log.
+        error_message (str): error message to show if the expected event does not occur.
+        update_position (boolean): whether to update the last read position in the monitored log.
+        timeout (int): timeout to check the event in the Wazuh log.
+        accum_results (int): Accumulation of matches. 
+ """ + file_monitor = FileMonitor(file_to_monitor) if file_monitor is None else file_monitor + error_message = f"Could not find this event in {file_to_monitor}: {callback}" if not error_message \ + else error_message + + file_monitor.start(timeout=timeout, update_position=update_position, accum_results=accum_results, + callback=make_authd_callback(callback, prefix), error_message=error_message) diff --git a/deps/wazuh_testing/wazuh_testing/modules/fim/__init__.py b/deps/wazuh_testing/wazuh_testing/modules/fim/__init__.py index eeeaf91ff0..842cde0c5a 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/fim/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/modules/fim/__init__.py @@ -8,18 +8,153 @@ ''' import sys +import os +from wazuh_testing.tools import PREFIX + +if sys.platform == 'win32': + import win32con + import win32api + # Variables +SIZE_LIMIT_CONFIGURED_VALUE = 10240 + +if sys.platform == 'win32': + + registry_parser = { + 'HKEY_CLASSES_ROOT': win32con.HKEY_CLASSES_ROOT, + 'HKEY_CURRENT_USER': win32con.HKEY_CURRENT_USER, + 'HKEY_LOCAL_MACHINE': win32con.HKEY_LOCAL_MACHINE, + 'HKEY_USERS': win32con.HKEY_USERS, + 'HKEY_CURRENT_CONFIG': win32con.HKEY_CURRENT_CONFIG + } + + registry_class_name = { + win32con.HKEY_CLASSES_ROOT: 'HKEY_CLASSES_ROOT', + win32con.HKEY_CURRENT_USER: 'HKEY_CURRENT_USER', + win32con.HKEY_LOCAL_MACHINE: 'HKEY_LOCAL_MACHINE', + win32con.HKEY_USERS: 'HKEY_USERS', + win32con.HKEY_CURRENT_CONFIG: 'HKEY_CURRENT_CONFIG' + } + + registry_value_type = { + win32con.REG_NONE: 'REG_NONE', + win32con.REG_SZ: 'REG_SZ', + win32con.REG_EXPAND_SZ: 'REG_EXPAND_SZ', + win32con.REG_BINARY: 'REG_BINARY', + win32con.REG_DWORD: 'REG_DWORD', + win32con.REG_DWORD_BIG_ENDIAN: 'REG_DWORD_BIG_ENDIAN', + win32con.REG_LINK: 'REG_LINK', + win32con.REG_MULTI_SZ: 'REG_MULTI_SZ', + win32con.REG_RESOURCE_LIST: 'REG_RESOURCE_LIST', + win32con.REG_FULL_RESOURCE_DESCRIPTOR: 'REG_FULL_RESOURCE_DESCRIPTOR', + win32con.REG_RESOURCE_REQUIREMENTS_LIST: 'REG_RESOURCE_REQUIREMENTS_LIST', + win32con.REG_QWORD: 'REG_QWORD' + } + + REG_NONE = win32con.REG_NONE + REG_SZ = win32con.REG_SZ + REG_EXPAND_SZ = win32con.REG_EXPAND_SZ + REG_BINARY = win32con.REG_BINARY + REG_DWORD = win32con.REG_DWORD + REG_DWORD_BIG_ENDIAN = win32con.REG_DWORD_BIG_ENDIAN + REG_LINK = win32con.REG_LINK + REG_MULTI_SZ = win32con.REG_MULTI_SZ + REG_RESOURCE_LIST = win32con.REG_RESOURCE_LIST + REG_FULL_RESOURCE_DESCRIPTOR = win32con.REG_FULL_RESOURCE_DESCRIPTOR + REG_RESOURCE_REQUIREMENTS_LIST = win32con.REG_RESOURCE_REQUIREMENTS_LIST + REG_QWORD = win32con.REG_QWORD + KEY_WOW64_32KEY = win32con.KEY_WOW64_32KEY + KEY_WOW64_64KEY = win32con.KEY_WOW64_64KEY + KEY_ALL_ACCESS = win32con.KEY_ALL_ACCESS + RegOpenKeyEx = win32api.RegOpenKeyEx + RegCloseKey = win32api.RegCloseKey +else: + + registry_parser = {} + registry_class_name = {} + registry_value_type = {} + RegOpenKeyEx = 0 + RegCloseKey = 0 + KEY_WOW64_32KEY = 0 + KEY_WOW64_64KEY = 0 + REG_NONE = 0 + REG_SZ = 0 + REG_EXPAND_SZ = 0 + REG_BINARY = 0 + REG_DWORD = 0 + REG_DWORD_BIG_ENDIAN = 0 + REG_LINK = 0 + REG_MULTI_SZ = 0 + REG_RESOURCE_LIST = 0 + REG_FULL_RESOURCE_DESCRIPTOR = 0 + REG_RESOURCE_REQUIREMENTS_LIST = 0 + REG_QWORD = 0 + KEY_ALL_ACCESS = 0 + + +# Check Types +CHECK_ALL = 'check_all' +CHECK_SUM = 'check_sum' +CHECK_SHA1SUM = 'check_sha1sum' +CHECK_MD5SUM = 'check_md5sum' +CHECK_SHA256SUM = 'check_sha256sum' +CHECK_SIZE = 'check_size' +CHECK_OWNER = 'check_owner' +CHECK_GROUP = 'check_group' +CHECK_PERM = 'check_perm' +CHECK_ATTRS = 'check_attrs' +CHECK_MTIME = 
'check_mtime' +CHECK_INODE = 'check_inode' +CHECK_TYPE = 'check_type' + +REQUIRED_ATTRIBUTES = { + CHECK_SHA1SUM: 'hash_sha1', + CHECK_MD5SUM: 'hash_md5', + CHECK_SHA256SUM: 'hash_sha256', + CHECK_SIZE: 'size', + CHECK_OWNER: ['uid', 'user_name'], + CHECK_GROUP: ['gid', 'group_name'], + CHECK_PERM: 'perm', + CHECK_ATTRS: 'attributes', + CHECK_MTIME: 'mtime', + CHECK_INODE: 'inode', + CHECK_ALL: {CHECK_SHA256SUM, CHECK_SHA1SUM, CHECK_MD5SUM, CHECK_SIZE, CHECK_OWNER, + CHECK_GROUP, CHECK_PERM, CHECK_ATTRS, CHECK_MTIME, CHECK_INODE}, + CHECK_SUM: {CHECK_SHA1SUM, CHECK_SHA256SUM, CHECK_MD5SUM} +} + +REQUIRED_REG_KEY_ATTRIBUTES = { + CHECK_OWNER: ['uid', 'user_name'], + CHECK_GROUP: ['gid', 'group_name'], + CHECK_PERM: 'perm', + CHECK_MTIME: 'mtime', + CHECK_ALL: {CHECK_OWNER, CHECK_GROUP, CHECK_PERM, CHECK_MTIME} +} + +REQUIRED_REG_VALUE_ATTRIBUTES = { + CHECK_SHA1SUM: 'hash_sha1', + CHECK_MD5SUM: 'hash_md5', + CHECK_SHA256SUM: 'hash_sha256', + CHECK_SIZE: 'size', + CHECK_TYPE: 'value_type', + CHECK_ALL: {CHECK_SHA256SUM, CHECK_SHA1SUM, CHECK_MD5SUM, CHECK_SIZE, CHECK_TYPE}, + CHECK_SUM: {CHECK_SHA1SUM, CHECK_SHA256SUM, CHECK_MD5SUM} +} # Key variables -WINDOWS_HKEY_LOCAL_MACHINE = 'HKEY_LOCAL_MACHINE' MONITORED_KEY = 'SOFTWARE\\random_key' +MONITORED_KEY_2 = 'SOFTWARE\\Classes\\random_key_2' +MONITORED_KEY_3 = 'SOFTWARE\\Classes\\random_key_3' + +WINDOWS_HKEY_LOCAL_MACHINE = 'HKEY_LOCAL_MACHINE' WINDOWS_REGISTRY = 'WINDOWS_REGISTRY' # Value key SYNC_INTERVAL = 'SYNC_INTERVAL' -SYNC_INTERVAL_VALUE = MAX_EVENTS_VALUE = 20 +SYNC_INTERVAL_VALUE = 30 +MAX_EVENTS_VALUE = 20 # Folders variables @@ -27,22 +162,23 @@ TEST_DIRECTORIES = 'TEST_DIRECTORIES' TEST_REGISTRIES = 'TEST_REGISTRIES' +MONITORED_DIR_1 = os.path.join(PREFIX, TEST_DIR_1) # Syscheck attributes REPORT_CHANGES = 'report_changes' -DIFF_SIZE_LIMIT = 'diff_size_limit' FILE_SIZE_ENABLED = 'FILE_SIZE_ENABLED' FILE_SIZE_LIMIT = 'FILE_SIZE_LIMIT' DISK_QUOTA_ENABLED = 'DISK_QUOTA_ENABLED' DISK_QUOTA_LIMIT = 'DISK_QUOTA_LIMIT' +DIFF_SIZE_LIMIT = 'diff_size_limit' # Syscheck values DIFF_LIMIT_VALUE = 2 DIFF_DEFAULT_LIMIT_VALUE = 51200 -# FIM modules -SCHEDULE_MODE = 'scheduled' +# FIM modes +SCHEDULED_MODE = 'scheduled' REALTIME_MODE = 'realtime' WHODATA_MODE = 'whodata' @@ -50,27 +186,14 @@ # Yaml Configuration YAML_CONF_REGISTRY_RESPONSE = 'wazuh_conf_registry_responses_win32.yaml' YAML_CONF_SYNC_WIN32 = 'wazuh_sync_conf_win32.yaml' -YAML_CONF_DIFF = 'wazuh_conf_diff.yaml' YAML_CONF_MAX_EPS_SYNC = 'wazuh_sync_conf_max_eps.yaml' + # Synchronization options SYNCHRONIZATION_ENABLED = 'SYNCHRONIZATION_ENABLED' SYNCHRONIZATION_REGISTRY_ENABLED = 'SYNCHRONIZATION_REGISTRY_ENABLED' -# Callbacks message -CB_INTEGRITY_CONTROL_MESSAGE = r'.*Sending integrity control message: (.+)$' -CB_REGISTRY_DBSYNC_NO_DATA = r'.*#!-fim_registry dbsync no_data (.+)' -CB_MAXIMUM_FILE_SIZE = r'.*Maximum file size limit to generate diff information configured to \'(\d+) KB\'.*' -CB_AGENT_CONNECT = r'.* Connected to the server .*' - -# Error message -ERR_MSG_MAXIMUM_FILE_SIZE = 'Did not receive expected "Maximum file size limit configured to \'... KB\'..." event' -ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE = 'Wrong value for diff_size_limit' -ERR_MSG_AGENT_DISCONNECT = 'Agent couldn\'t connect to server.' 
-ERR_MSG_INTEGRITY_CONTROL_MSG = 'Didn\'t receive control message(integrity_check_global)' - # Setting Local_internal_option file - if sys.platform == 'win32': FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS = { 'windows.debug': '2', diff --git a/deps/wazuh_testing/wazuh_testing/modules/fim/classes.py b/deps/wazuh_testing/wazuh_testing/modules/fim/classes.py new file mode 100644 index 0000000000..b35293735f --- /dev/null +++ b/deps/wazuh_testing/wazuh_testing/modules/fim/classes.py @@ -0,0 +1,566 @@ +# Copyright (C) 2015-2023, Wazuh Inc. +# Created by Wazuh, Inc. . +# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +import os +import sys +import subprocess +import json +from jsonschema import validate +from collections import Counter +from wazuh_testing import global_parameters, logger, WAZUH_TESTING_DATA_PATH +from wazuh_testing.modules.fim import REQUIRED_ATTRIBUTES, REQUIRED_REG_KEY_ATTRIBUTES, REQUIRED_REG_VALUE_ATTRIBUTES +from wazuh_testing.modules.fim.event_monitor import callback_detect_event + +if sys.platform == 'linux2' or sys.platform == 'linux': + from jq import jq + +_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') + + +def validate_event(event, checks=None, mode=None): + """Check if event is properly formatted according to some checks. + + Args: + event (dict): represents an event generated by syscheckd. + checks (:obj:`set`, optional): set of XML CHECK_* options. Default `{CHECK_ALL}` + mode (:obj:`str`, optional): represents the FIM mode expected for the event to validate. + """ + + def get_required_attributes(check_attributes, result=None): + result = set() if result is None else result + for check in check_attributes: + mapped = REQUIRED_ATTRIBUTES[check] + if isinstance(mapped, str): + result |= {mapped} + elif isinstance(mapped, list): + result |= set(mapped) + elif isinstance(mapped, set): + result |= get_required_attributes(mapped, result=result) + return result + + json_file = 'syscheck_event_windows.json' if sys.platform == "win32" else 'syscheck_event.json' + with open(os.path.join(WAZUH_TESTING_DATA_PATH, json_file), 'r') as f: + schema = json.load(f) + validate(schema=schema, instance=event) + + # Check FIM mode + mode = global_parameters.current_configuration['metadata']['fim_mode'] if mode is None else mode.replace('-', '') + assert (event['data']['mode']).replace('-', '') == mode, f"The event's FIM mode was '{event['data']['mode']}' \ + but was expected to be '{mode}'" + + # Check attributes + if checks: + attributes = event['data']['attributes'].keys() - {'type', 'checksum'} + + required_attributes = get_required_attributes(checks) + required_attributes -= get_required_attributes({CHECK_GROUP}) if sys.platform == "win32" else {'attributes'} + + intersection = attributes ^ required_attributes + intersection_debug = "Event attributes are: " + str(attributes) + intersection_debug += "\nRequired Attributes are: " + str(required_attributes) + intersection_debug += "\nIntersection is: " + str(intersection) + assert (intersection == set()), f'Attributes and required_attributes are not the same. 
' + intersection_debug + + # Check add file event + if event['data']['type'] == 'added': + assert 'old_attributes' not in event['data'] and 'changed_attributes' not in event['data'] + # Check modify file event + if event['data']['type'] == 'modified': + assert 'old_attributes' in event['data'] and 'changed_attributes' in event['data'] + + old_attributes = event['data']['old_attributes'].keys() - {'type', 'checksum'} + old_intersection = old_attributes ^ required_attributes + old_intersection_debug = "Event attributes are: " + str(old_attributes) + old_intersection_debug += "\nRequired Attributes are: " + str(required_attributes) + old_intersection_debug += "\nIntersection is: " + str(old_intersection) + assert (old_intersection == set()), (f'Old_attributes and required_attributes are not the same. ' + + old_intersection_debug) + + +def validate_registry_event(event, checks=None, mode=None, is_key=True): + """Check if event is properly formatted according to some checks. + + Args: + event (dict): represents an event generated by syscheckd. + checks (:obj:`set`, optional): set of XML CHECK_* options. Default `{CHECK_ALL}` + mode (:obj:`str`, optional): represents the FIM mode expected for the event to validate. + is_key(Boolean): define if event to validate is a registry_key (True) or registry_value(False). Default True + """ + + def get_required_attributes(check_attributes, result=None): + result = set() if result is None else result + for check in check_attributes: + mapped = REQUIRED_REG_KEY_ATTRIBUTES[check] if is_key else REQUIRED_REG_VALUE_ATTRIBUTES[check] + + if isinstance(mapped, str): + result |= {mapped} + elif isinstance(mapped, list): + result |= set(mapped) + elif isinstance(mapped, set): + result |= get_required_attributes(mapped, result=result) + + return result + + json_file = 'syscheck_event_windows.json' if sys.platform == "win32" else 'syscheck_event.json' + with open(os.path.join(_data_path, json_file), 'r') as f: + schema = json.load(f) + + validate(schema=schema, instance=event) + + # Check FIM mode + mode = global_parameters.current_configuration['metadata']['fim_mode'] if mode is None else mode.replace('-', '') + assert (event['data']['mode']).replace('-', '') == mode, f"The event's FIM mode was '{event['data']['mode']}' \ + but was expected to be '{mode}'" + + # Check attributes + if checks: + attributes = event['data']['attributes'].keys() - {'type', 'checksum'} + + required_attributes = get_required_attributes(checks) + + intersection = attributes ^ required_attributes + intersection_debug = "Event attributes are: " + str(attributes) + intersection_debug += "\nRequired Attributes are: " + str(required_attributes) + intersection_debug += "\nIntersection is: " + str(intersection) + + assert (intersection == set()), f'Attributes and required_attributes are not the same. 
' + intersection_debug + + # Check add file event + if event['data']['type'] == 'added': + assert 'old_attributes' not in event['data'] and 'changed_attributes' not in event['data'] + + # Check modify file event + if event['data']['type'] == 'modified': + assert 'old_attributes' in event['data'] and 'changed_attributes' in event['data'] + + old_attributes = event['data']['old_attributes'].keys() - {'type', 'checksum'} + old_intersection = old_attributes ^ required_attributes + old_intersection_debug = "Event attributes are: " + str(old_attributes) + old_intersection_debug += "\nRequired Attributes are: " + str(required_attributes) + old_intersection_debug += "\nIntersection is: " + str(old_intersection) + + assert (old_intersection == set()), (f'Old_attributes and required_attributes are not the same. ' + + old_intersection_debug) + + +class CustomValidator: + """Enable using user-defined validators over the events when validating them with EventChecker""" + + def __init__(self, validators_after_create=None, validators_after_update=None, + validators_after_delete=None, validators_after_cud=None): + self.validators_create = validators_after_create + self.validators_update = validators_after_update + self.validators_delete = validators_after_delete + self.validators_cud = validators_after_cud + + def validate_after_create(self, events): + """Custom validators to be applied by default when the event_type is 'added'. + + Args: + events (list): list of events to be validated. + """ + if self.validators_create is not None: + for event in events: + for validator in self.validators_create: + validator(event) + + def validate_after_update(self, events): + """Custom validators to be applied by default when the event_type is 'modified'. + + Args: + events (list): list of events to be validated. + """ + if self.validators_update is not None: + for event in events: + for validator in self.validators_update: + validator(event) + + def validate_after_delete(self, events): + """Custom validators to be applied by default when the event_type is 'deleted'. + + Args: + events (list): list of events to be validated. + """ + if self.validators_delete is not None: + for event in events: + for validator in self.validators_delete: + validator(event) + + def validate_after_cud(self, events): + """Custom validators to be applied always by default. + + Args: + events (list): list of events to be validated. + """ + if self.validators_cud is not None: + for event in events: + for validator in self.validators_cud: + validator(event) + + +class EventChecker: + """Utility to allow fetch events and validate them.""" + + def __init__(self, log_monitor, folder, file_list=['testfile0'], options=None, custom_validator=None, encoding=None, + callback=callback_detect_event): + self.log_monitor = log_monitor + self.folder = folder + self.file_list = file_list + self.custom_validator = custom_validator + self.options = options + self.encoding = encoding + self.events = None + self.callback = callback + + def fetch_and_check(self, event_type, min_timeout=1, triggers_event=True, extra_timeout=0, event_mode=None, + escaped=False): + """Call both 'fetch_events' and 'check_events'. + + Args: + event_type (str): Expected type of the raised event {'added', 'modified', 'deleted'}. + event_mode (str, optional): Specifies the scan mode to check in the events + min_timeout (int, optional): seconds to wait until an event is raised when trying to fetch. Defaults `1` + triggers_event (boolean, optional): True if the event should be raised. 
False otherwise. Defaults `True` + extra_timeout (int, optional): Additional time to wait after the min_timeout + """ + num_files = len(self.file_list) + error_msg = "TimeoutError was raised because " + error_msg += str(num_files) if num_files > 1 else "a single" + error_msg += " '" + str(event_type) + "' " + error_msg += "events were " if num_files > 1 else "event was " + error_msg += "expected for " + str(self._get_file_list()) + error_msg += " but were not detected." if len(self.file_list) > 1 else " but was not detected." + + self.events = self.fetch_events(min_timeout, triggers_event, extra_timeout, error_message=error_msg) + self.check_events(event_type, mode=event_mode, escaped=escaped) + + def fetch_events(self, min_timeout=1, triggers_event=True, extra_timeout=0, error_message=''): + """Try to fetch events on a given log monitor. Will return a list with the events detected. + + Args: + min_timeout (int, optional): seconds to wait until an event is raised when trying to fetch. Defaults `1` + triggers_event (boolean, optional): True if the event should be raised. False otherwise. Defaults `True` + extra_timeout (int, optional): Additional time to wait after the min_timeout + error_message (str): Message to explain a possible timeout error + """ + + def clean_results(event_list): + """Iterate the event_list provided and check if the 'modified' events contained should be merged to fix + whodata's bug that raise more than one modification event when a file is modified. If some 'modified' event + shares 'path' and 'timestamp' we assume that belongs to the same modification. + """ + if not isinstance(event_list, list): + return event_list + result_list = list() + previous = None + while len(event_list) > 0: + current = event_list.pop(0) + if current['data']['type'] == "modified": + if not previous: + previous = current + elif (previous['data']['path'] == current['data']['path'] and + current['data']['timestamp'] in [previous['data']['timestamp'], + previous['data']['timestamp'] + 1]): + previous['data']['changed_attributes'] = list(set(previous['data']['changed_attributes'] + + current['data']['changed_attributes'])) + previous['data']['attributes'] = current['data']['attributes'] + else: + result_list.append(previous) + previous = current + else: + result_list.append(current) + if previous: + result_list.append(previous) + return result_list + + try: + result = self.log_monitor.start(timeout=max(len(self.file_list) * 0.01, min_timeout), + callback=self.callback, + accum_results=len(self.file_list), + timeout_extra=extra_timeout, + encoding=self.encoding, + error_message=error_message).result() + assert triggers_event, f'No events should be detected.' + if extra_timeout > 0: + result = clean_results(result) + return result if isinstance(result, list) else [result] + except TimeoutError: + if triggers_event: + raise + logger.info("TimeoutError was expected and correctly caught.") + + def check_events(self, event_type, mode=None, escaped=False): + """Check and validate all events in the 'events' list. + + Args: + event_type (str): Expected type of the raised event {'added', 'modified', 'deleted'}. + mode (str, optional): Specifies the FIM scan mode to check in the events + escaped (Boolean): check if file path has to be escaped. + """ + + def validate_checkers_per_event(events, options, mode): + """Check if each event is properly formatted according to some checks. + + Args: + events (list): event list to be checked. + options (set): set of XML CHECK_* options. 
Default `{CHECK_ALL}` + mode (str): represents the FIM mode expected for the event to validate. + """ + for ev in events: + validate_event(ev, options, mode) + + def check_events_type(events, ev_type, file_list=['testfile0']): + event_types = Counter(filter_events(events, ".[].data.type")) + msg = f"Non expected number of events. {event_types[ev_type]} != {len(file_list)}" + assert (event_types[ev_type] == len(file_list)), msg + + def check_events_path(events, folder, file_list=['testfile0'], mode=None, escaped=False): + mode = global_parameters.current_configuration['metadata']['fim_mode'] if mode is None else mode + data_path = filter_events(events, ".[].data.path") + for file_name in file_list: + expected_path = os.path.join(folder, file_name) + if escaped: + expected_path = expected_path.replace("\\", "\\\\") + if self.encoding is not None: + for index, item in enumerate(data_path): + data_path[index] = item.encode(encoding=self.encoding) + if sys.platform == 'darwin' and self.encoding and self.encoding != 'utf-8': + logger.info(f"Not asserting {expected_path} in event.data.path. " + f'Reason: using non-utf-8 encoding in darwin.') + else: + error_msg = f"Expected data path was '{expected_path}' but event data path is '{data_path}'" + assert (expected_path in str(data_path)), error_msg + + def filter_events(events, mask): + """Returns a list of elements matching a specified mask in the events list using jq module.""" + if sys.platform in ("win32", 'sunos5', 'darwin'): + stdout = subprocess.check_output(["jq", "-r", mask], input=json.dumps(events).encode()) + return stdout.decode("utf8").strip().split(os.linesep) + else: + return jq(mask).transform(events, multiple_output=True) + + if self.events is not None: + validate_checkers_per_event(self.events, self.options, mode) + check_events_type(self.events, event_type, self.file_list) + check_events_path(self.events, self.folder, file_list=self.file_list, mode=mode, escaped=escaped) + + if self.custom_validator is not None: + self.custom_validator.validate_after_cud(self.events) + if event_type == "added": + self.custom_validator.validate_after_create(self.events) + elif event_type == "modified": + self.custom_validator.validate_after_update(self.events) + elif event_type == "deleted": + self.custom_validator.validate_after_delete(self.events) + + def _get_file_list(self): + result_list = [] + for file_name in self.file_list: + expected_file_path = os.path.join(self.folder, file_name) + expected_file_path = expected_file_path[:1].lower() + expected_file_path[1:] + result_list.append(expected_file_path) + return result_list + + +class RegistryEventChecker: + """Utility to allow fetch events and validate them.""" + + def __init__(self, log_monitor, registry_key, registry_dict=None, options=None, custom_validator=None, + encoding=None, callback=callback_detect_event, is_value=False): + self.log_monitor = log_monitor + self.registry_key = registry_key + global registry_ignore_path + registry_ignore_path = registry_key + self.registry_dict = registry_dict + self.custom_validator = custom_validator + self.options = options + self.encoding = encoding + self.events = None + self.callback = callback + self.is_value = is_value + + def __del__(self): + global registry_ignore_path + registry_ignore_path = None + + def fetch_and_check(self, event_type, min_timeout=1, triggers_event=True, extra_timeout=0): + """Call 'fetch_events', 'fetch_key_events' and 'check_events', depending on the type of event expected. 
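+
+        Example (illustrative; 'wazuh_log_monitor' and the key path are placeholders, not values from this patch):
+            checker = RegistryEventChecker(log_monitor=wazuh_log_monitor,
+                                           registry_key='HKEY_LOCAL_MACHINE\\SOFTWARE\\test_key',
+                                           registry_dict={'some_value': ''}, is_value=True)
+            checker.fetch_and_check('added', min_timeout=global_parameters.default_timeout)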
+ + Args: + event_type (str): Expected type of the raised event {'added', 'modified', 'deleted'}. + min_timeout (int, optional): seconds to wait until an event is raised when trying to fetch. Defaults `1` + triggers_event (boolean, optional): True if the event should be raised. False otherwise. Defaults `True` + extra_timeout (int, optional): Additional time to wait after the min_timeout + """ + assert event_type in ['added', 'modified', 'deleted'], f'Incorrect event type: {event_type}' + + num_elems = len(self.registry_dict) + + error_msg = "TimeoutError was raised because " + error_msg += str(num_elems) if num_elems > 1 else "a single" + error_msg += " '" + str(event_type) + "' " + error_msg += "events were " if num_elems > 1 else "event was " + error_msg += "expected for " + str(self._get_elem_list()) + error_msg += " but were not detected." if num_elems > 1 else " but was not detected." + + key_error_msg = f"TimeoutError was raised because 1 event was expected for {self.registry_key} " + key_error_msg += 'but was not detected.' + + if event_type == 'modified' or self.is_value: + self.events = self.fetch_events(min_timeout, triggers_event, extra_timeout, error_message=error_msg) + self.check_events(event_type) + elif event_type == 'added': + self.events = self.fetch_events(min_timeout, triggers_event, extra_timeout, error_message=error_msg) + self.check_events(event_type) + elif event_type == 'deleted': + self.events = self.fetch_events(min_timeout, triggers_event, extra_timeout, error_message=error_msg) + self.check_events(event_type) + + def fetch_events(self, min_timeout=1, triggers_event=True, extra_timeout=0, error_message=''): + """ Gets as much events that match the callback as possible in the given time. + + Args: + min_timeout (int, optional): seconds to wait until an event is raised when trying to fetch. Defaults `1` + triggers_event (boolean, optional): True if the event should be raised. False otherwise. Defaults `True` + extra_timeout (int, optional): Additional time to wait after the min_timeout + error_message(str, optional): Message that will be printed by the FileMonitor in case of error. + """ + timeout_per_registry_estimation = 0.01 + try: + result = self.log_monitor.start(timeout=max((len(self.registry_dict)) * timeout_per_registry_estimation, + min_timeout), callback=self.callback, accum_results=len(self.registry_dict), + timeout_extra=extra_timeout, encoding=self.encoding, + error_message=error_message).result() + + assert triggers_event, 'No events should be detected.' + return result if isinstance(result, list) else [result] + except TimeoutError: + if triggers_event: + raise + logger.info("TimeoutError was expected and correctly caught.") + + def check_events(self, event_type, mode=None): + """Check and validate all events in the 'events' list. + + Args: + event_type (str): Expected type of the raised event {'added', 'modified', 'deleted'}. + mode (str): expected mode of the raised event. + """ + + def validate_checkers_per_event(events, options, mode): + """Check if each event is properly formatted according to some checks. + + Args: + events (list): event list to be checked. + options (set): set of XML CHECK_* options. Default `{CHECK_ALL}` + mode (str): represents the FIM mode expected for the event to validate. 
+            """
+            for ev in events:
+                if self.is_value:
+                    validate_registry_event(ev, options, mode, is_key=False)
+                else:
+                    validate_registry_event(ev, options, mode, is_key=True)
+
+        def check_events_type(events, ev_type, reg_list=['testkey0']):
+            """Checks the event type of each event in a list.
+
+            Args:
+                events (list): event list to be checked.
+                ev_type (str): type of expected event.
+                reg_list (list): list of keys that are being checked.
+            """
+            event_types = Counter(filter_events(events, ".[].data.type"))
+
+            assert (event_types[ev_type] == len(reg_list)), f'Unexpected number of \
+                  events. {event_types[ev_type]} != {len(reg_list)}'
+
+        def check_events_key_path(events, registry_key, reg_list=['testkey0'], mode=None):
+            """Checks the path of a registry_key event in a list.
+
+            Args:
+                events (list): event list to be checked.
+                registry_key (str): path to the key being checked.
+                reg_list (list, optional): list of keys that are being checked.
+                mode (str, optional): defines the type of FIM monitoring mode configured.
+            """
+            mode = global_parameters.current_configuration['metadata']['fim_mode'] if mode is None else mode
+            key_path = filter_events(events, ".[].data.path")
+
+            for reg in reg_list:
+                expected_path = os.path.join(registry_key, reg)
+
+                if self.encoding is not None:
+                    for index, item in enumerate(key_path):
+                        key_path[index] = item.encode(encoding=self.encoding)
+
+                error_msg = f"Expected key path was '{expected_path}' but event key path is '{key_path}'"
+                assert (expected_path in key_path), error_msg
+
+        def check_events_registry_value(events, key, value_list=['testvalue0'], mode=None):
+            """Checks the value name and key path of a registry_value event in a list.
+
+            Args:
+                events (list): event list to be checked.
+                key (str): path to the key being checked where the value has been added.
+                value_list (list, optional): list of values that are being checked.
+                mode (str, optional): defines the type of FIM monitoring mode configured.
+            """
+            mode = global_parameters.current_configuration['metadata']['fim_mode'] if mode is None else mode
+            key_path = filter_events(events, ".[].data.path")
+            value_name = filter_events(events, ".[].data.value_name")
+
+            for value in value_list:
+                error_msg = f"Expected value name was '{value}' but event value name is '{value_name}'"
+                assert (value in value_name), error_msg
+
+                error_msg = f"Expected key path was '{key}' but event key path is '{key_path}'"
+                assert (key in key_path), error_msg
+
+        def filter_events(events, mask):
+            """Returns a list of elements matching a specified mask in the events list using the jq module.
+
+            Args:
+                events (list): event list to be checked.
+ mask(str): mask to be used to check events using jq module + """ + if sys.platform in ("win32", 'sunos5', 'darwin'): + stdout = subprocess.check_output(["jq", "-r", mask], input=json.dumps(events).encode()) + + return stdout.decode("utf8").strip().split(os.linesep) + else: + return jq(mask).transform(events, multiple_output=True) + + if self.events is not None: + validate_checkers_per_event(self.events, self.options, mode) + + if self.is_value: + check_events_type(self.events, event_type, self.registry_dict) + check_events_registry_value(self.events, self.registry_key, value_list=self.registry_dict, + mode=mode) + else: + check_events_type(self.events, event_type, self.registry_dict) + check_events_key_path(self.events, self.registry_key, reg_list=self.registry_dict, mode=mode) + + if self.custom_validator is not None: + self.custom_validator.validate_after_cud(self.events) + + if event_type == "added": + self.custom_validator.validate_after_create(self.events) + elif event_type == "modified": + self.custom_validator.validate_after_update(self.events) + elif event_type == "deleted": + self.custom_validator.validate_after_delete(self.events) + + def _get_elem_list(self): + """returns the elements in the registry_dict variable as a list""" + result_list = [] + + for elem_name in self.registry_dict: + if elem_name in self.registry_key: + continue + + expected_elem_path = os.path.join(self.registry_key, elem_name) + result_list.append(expected_elem_path) + + return result_list diff --git a/deps/wazuh_testing/wazuh_testing/modules/fim/event_monitor.py b/deps/wazuh_testing/wazuh_testing/modules/fim/event_monitor.py index e2549acffd..4eb58724e5 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/fim/event_monitor.py +++ b/deps/wazuh_testing/wazuh_testing/modules/fim/event_monitor.py @@ -5,8 +5,166 @@ import re import json +from sys import platform from datetime import datetime -from wazuh_testing.modules.fim import CB_AGENT_CONNECT, CB_INTEGRITY_CONTROL_MESSAGE +from wazuh_testing import LOG_FILE_PATH, logger, T_60 +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback + + +# Variables +file_monitor = FileMonitor(LOG_FILE_PATH) + + +# Callbacks messages +CB_DETECT_FIM_EVENT = r".*Sending FIM event: (.+)$" +CB_FOLDERS_MONITORED_REALTIME = r'.*Folders monitored with real-time engine: (\d+)' +CB_INVALID_CONFIG_VALUE = r".*Invalid value for element '(.*)': (.*)." +CB_INTEGRITY_CONTROL_MESSAGE = r".*Sending integrity control message: (.+)$" +CB_MAXIMUM_FILE_SIZE = r'.*Maximum file size limit to generate diff information configured to \'(\d+) KB\'.*' +CB_AGENT_CONNECT = r'.* Connected to the server .*' +CB_INODE_ENTRIES_PATH_COUNT = r".*Fim inode entries: '(\d+)', path count: '(\d+)'" +CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE = r".*registry_value.*Couldn't insert ('.*') entry into DB. The DB is full.*" +CB_DATABASE_FULL_COULD_NOT_INSERT_KEY = r".*registry_key.*Couldn't insert ('.*') entry into DB. The DB is full.*" +CB_COUNT_REGISTRY_ENTRIES = r".*Fim registry entries count: '(\d+)'" +CB_COUNT_REGISTRY_VALUE_ENTRIES = r".*Fim registry values entries count: '(\d+)'" +CB_REGISTRY_DBSYNC_NO_DATA = r".*fim_registry_(.*) dbsync no_data (.*)'" +CB_REGISTRY_LIMIT_CAPACITY = r".*Registry database is (\d+)% full." +CB_REGISTRY_DB_BACK_TO_NORMAL = r".*(The registry database status returns to normal)." +CB_REGISTRY_LIMIT_VALUE = r".*Maximum number of registry values to be monitored: '(\d+)'" +CB_FILE_LIMIT_CAPACITY = r".*File database is (\d+)% full." 
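+
+# Illustrative usage sketch (not from the original patch): these patterns are meant to be
+# matched against single agent log lines, e.g. through generate_monitoring_callback().
+# The log line below is invented for demonstration purposes only.
+_sample_line = "2022/06/01 10:00:00 wazuh-syscheckd: DEBUG: (6022): File database is 90% full."
+_sample_match = re.match(CB_FILE_LIMIT_CAPACITY, _sample_line)
+assert _sample_match is not None and _sample_match.group(1) == '90'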
+CB_FILE_LIMIT_BACK_TO_NORMAL = r".*(Sending DB back to normal alert)." +CB_FIM_ENTRIES_COUNT = r".*Fim file entries count: '(\d+)'" +CB_FILE_LIMIT_VALUE = r".*Maximum number of files to be monitored: '(\d+)'" +CB_FILE_LIMIT_DISABLED = r".*(No limit set) to maximum number of file entries to be monitored" +CB_PATH_MONITORED_REALTIME = r".*Directory added for real time monitoring: (.*)" +CB_PATH_MONITORED_WHODATA = r".*Added audit rule for monitoring directory: (.*)" +CB_PATH_MONITORED_WHODATA_WINDOWS = r".*Setting up SACL for (.*)" +CB_SYNC_SKIPPED = r".*Sync still in progress. Skipped next sync and increased interval.*'(\d+)s'" +CB_SYNC_INTERVAL_RESET = r".*Previous sync was successful. Sync interval is reset to: '(\d+)s'" +CB_IGNORING_DUE_TO_SREGEX = r".*?Ignoring path '(.*)' due to sregex '(.*)'.*" +CB_IGNORING_DUE_TO_PATTERN = r".*?Ignoring path '(.*)' due to pattern '(.*)'.*" +CB_MAXIMUM_FILE_SIZE = r'.*Maximum file size limit to generate diff information configured to \'(\d+) KB\'.*' +CB_AGENT_CONNECT = r'.* Connected to the server .*' +CB_REALTIME_WHODATA_ENGINE_STARTED = r'.*File integrity monitoring (real-time Whodata) engine started.*' +CB_DISK_QUOTA_LIMIT_CONFIGURED_VALUE = r'.*Maximum disk quota size limit configured to \'(\d+) KB\'.*' +CB_FILE_EXCEEDS_DISK_QUOTA = r'.*The (.*) of the file size \'(.*)\' exceeds the disk_quota.*' +CB_FILE_SIZE_LIMIT_REACHED = r'.*File \'(.*)\' is too big for configured maximum size to perform diff operation\.' +CB_DIFF_FOLDER_DELETED = r'.*Folder \'(.*)\' has been deleted.*' +CB_FIM_PATH_CONVERTED = r".*fim_adjust_path.*Convert '(.*) to '(.*)' to process the FIM events." +CB_STARTING_WINDOWS_AUDIT = r'.*state_checker.*(Starting check of Windows Audit Policies and SACLs)' +CB_SWITCHING_DIRECTORIES_TO_REALTIME = r'.*state_checker.*(Audit policy change detected.\ + Switching directories to realtime)' +CB_RECIEVED_EVENT_4719 = r'.*win_whodata.*(Event 4719).*Switching directories to realtime' + +# Error message +ERR_MSG_REALTIME_FOLDERS_EVENT = 'Did not receive expected "Folders monitored with real-time engine" event' +ERR_MSG_WHODATA_ENGINE_EVENT = 'Did not receive expected "real-time Whodata engine started" event' +ERR_MSG_INVALID_CONFIG_VALUE = 'Did not receive expected "Invalid value for element" event' +ERR_MSG_AGENT_DISCONNECT = 'Agent couldn\'t connect to server.' +ERR_MSG_INTEGRITY_CONTROL_MSG = 'Didn\'t receive control message(integrity_check_global)' +ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT = 'Did not receive expected "DEBUG: ...: database is ...% full" alert' +ERR_MSG_WRONG_CAPACITY_LOG_DB_LIMIT = 'Wrong capacity log for DB file_limit' +ERR_MSG_DB_BACK_TO_NORMAL = 'Did not receive expected "DEBUG: ... database status returns to normal." event' +ERR_MSG_DATABASE_FULL_ALERT = 'Did not receive expected "DEBUG: ...: Registry database is 100% full" alert' +ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL = 'Wrong value for full database alert.' +ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT = 'Did not receive expected "DEBUG: ...: Couldn\'t insert \'...\' entry \ + into DB. The DB is full, ..." event' +ERR_MSG_DATABASE_FULL_ALERT_EVENT = 'Did not receive expected "DEBUG: ...: Sending DB 100% full alert." event' +ERR_MSG_WRONG_NUMBER_OF_ENTRIES = 'Wrong number of entries counted.' +ERR_MSG_WRONG_INODE_PATH_COUNT = 'Wrong number of inodes and path count' +ERR_MSG_FIM_INODE_ENTRIES = 'Did not receive expected "Fim inode entries: ..., path count: ..." event' +ERR_MSG_FIM_REGISTRY_ENTRIES = 'Did not receive expected "Fim Registry entries count: ..." 
event' +ERR_MSG_FIM_REGISTRY_VALUE_ENTRIES = 'Did not receive expected "Fim Registry value entries count: ..." event' +ERR_MSG_REGISTRY_LIMIT_VALUES = 'Did not receive expected "DEBUG: ...: Maximum number of registry values to \ + be monitored: ..." event' +ERR_MSG_WRONG_REGISTRY_LIMIT_VALUE = 'Wrong value for db_value_limit registries tag.' +ERR_MSG_FILE_LIMIT_VALUES = 'Did not receive expected "DEBUG: ...: Maximum number of entries to be monitored: \ + ..." event' +ERR_MSG_WRONG_FILE_LIMIT_VALUE = 'Wrong value for file_limit.' +ERR_MSG_FILE_LIMIT_DISABLED = 'Did not receive expected "DEBUG: ...: No limit set to maximum number of entries \ + to be monitored" event' +ERR_MSG_MAXIMUM_FILE_SIZE = 'Did not receive expected "Maximum file size limit configured to \'... KB\'..." event' +ERR_MSG_NO_EVENTS_EXPECTED = 'No events should be detected.' +ERR_MSG_DELETED_EVENT_NOT_RECIEVED = 'Did not receive expected deleted event' +ERR_MSG_FIM_EVENT_NOT_RECIEVED = 'Did not receive expected "Sending FIM event: ..." event' +ERR_MSG_MONITORING_PATH = 'Did not get the expected monitoring path line' +ERR_MSG_MULTIPLE_FILES_CREATION = 'Multiple files could not be created.' +ERR_MSG_SCHEDULED_SCAN_ENDED = 'Did not recieve the expected "DEBUG: ... Sending FIM event: {type:scan_end"...} event' +ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE = 'Wrong value for diff_size_limit' +ERR_MSG_INTEGRITY_OR_WHODATA_NOT_STARTED = 'Did not receive expected "File integrity monitoring real-time Whodata \ + engine started" or "Initializing FIM Integrity Synchronization check"' +ERR_MSG_INTEGRITY_CHECK_EVENT = 'Did not receive expected "Initializing FIM Integrity Synchronization check" event' +ERR_MSG_SYNC_SKIPPED_EVENT = 'Did not recieve the expected "Sync still in progress. Skipped next sync" event' +ERR_MSG_FIM_SYNC_NOT_DETECTED = 'Did not receive expected "Initializing FIM Integrity Synchronization check" event' +ERR_MSG_SYNC_INTERVAL_RESET_EVENT = 'Did not recieve the expected "Sync interval is reset" event' +ERR_MSG_CONTENT_CHANGES_EMPTY = "content_changes is empty" +ERR_MSG_CONTENT_CHANGES_NOT_EMPTY = "content_changes isn't empty" +ERR_MSG_FOLDERS_MONITORED_REALTIME = 'Did not receive expected "Folders monitored with real-time engine..." event' +ERR_MSG_WHODATA_ENGINE_EVENT = 'Did not receive "File integrity monitoring real-time Whodata engine started" event' +ERR_MSG_FIM_EVENT_NOT_DETECTED = 'Did not receive expected "Sending FIM event: ..." event.' +ERR_MSG_SCHEDULED_SCAN_STARTED = 'Did not receive expected "File integrity monitoring scan started" event' +ERR_MSG_SCHEDULED_SCAN_ENDED = 'Did not receive expected "File integrity monitoring scan ended" event' +ERR_MSG_DISK_QUOTA_LIMIT = 'Did not receive "Maximum disk quota size limit configured to \'... KB\'." event' +ERR_MSG_FILE_LIMIT_REACHED = 'Did not receive "File ... is too big ... to perform diff operation" event.' +ERR_MSG_FOLDER_DELETED = 'Did not receive expected "Folder ... has been deleted." event.' +ERR_MSG_SACL_CONFIGURED_EVENT = 'Did not receive the expected "The SACL of will be configured" event' +ERR_MSG_WHODATA_REALTIME_MODE_CHANGE_EVENT = 'Expected "directory starts to monitored in real-time" event not received' + + +# Callback functions +def callback_detect_event(line): + """ + Detect an 'event' type FIM log. 
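+
+    Args:
+        line (String): string line to be checked by callback in FileMonitor.
+
+    Returns:
+        dict: the decoded JSON FIM event when the line contains one and its 'type' is 'event', None otherwise.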
+ """ + msg = CB_DETECT_FIM_EVENT + match = re.match(msg, line) + if not match: + return None + + try: + json_event = json.loads(match.group(1)) + if json_event['type'] == 'event': + return json_event + except (json.JSONDecodeError, AttributeError, KeyError) as e: + logger.warning(f"Couldn't load a log line into json object. Reason {e}") + + +def callback_detect_end_scan(line): + """ Callback that detects if a line in a log is an end of scheduled scan event + Args: + line (String): string line to be checked by callback in FileMonitor. + """ + match = re.match(CB_DETECT_FIM_EVENT, line) + if not match: + return None + try: + if json.loads(match.group(1))['type'] == 'scan_end': + return True + except (json.JSONDecodeError, AttributeError, KeyError) as e: + logger.warning(f"Couldn't load a log line into json object. Reason {e}") + + +def callback_detect_scan_start(line): + """ Callback that detects if a line in a log is the start of a scheduled scan or initial scan. + Args: + line (String): string line to be checked by callback in FileMonitor. + """ + msg = CB_DETECT_FIM_EVENT + match = re.match(msg, line) + if not match: + return None + + try: + if json.loads(match.group(1))['type'] == 'scan_start': + return True + except (json.JSONDecodeError, AttributeError, KeyError) as e: + logger.warning(f"Couldn't load a log line into json object. Reason {e}") + + +def callback_detect_synchronization(line): + if 'Executing FIM sync' in line: + return line + return None def callback_connection_message(line): @@ -23,7 +181,291 @@ def callback_detect_integrity_control_event(line): def callback_integrity_message(line): - if callback_detect_integrity_control_event(line): + if callback_detect_event(line): match = re.match(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}).*({.*?})$", line) if match: return datetime.strptime(match.group(1), '%Y/%m/%d %H:%M:%S'), json.dumps(match.group(2)) + + +def callback_detect_file_integrity_event(line): + """ Callback that detects if a line contains a file integrity event + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + event = callback_detect_integrity_control_event(line) + if event and event['component'] == 'fim_file': + return event + return None + + +def callback_value_event(line): + event = callback_detect_event(line) + + if event is None or event['data']['attributes']['type'] != 'registry_value': + return None + + return event + + +def callback_detect_registry_integrity_event(line): + """ Callback that detects if a line contains a registry integrity event for a registry_key or registry_value + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + event = callback_detect_integrity_control_event(line) + if event and event['component'] == 'fim_registry_key': + return event + if event and event['component'] == 'fim_registry_value': + return event + return None + + +def callback_detect_registry_integrity_state_event(line): + """ Callback that detects if a line contains a registry integrity event of the state type + + Args: + line (String): string line to be checked by callback in File_Monitor. 
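+
+    Returns:
+        dict: the 'data' field of the integrity control event when its type is 'state', None otherwise.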
+ """ + event = callback_detect_registry_integrity_event(line) + if event and event['type'] == 'state': + return event['data'] + return None + + +def callback_entries_path_count(line): + if platform != 'win32': + match = re.match(CB_INODE_ENTRIES_PATH_COUNT, line) + else: + match = re.match(CB_FIM_ENTRIES_COUNT, line) + + if match: + if platform != 'win32': + return match.group(1), match.group(2) + else: + return match.group(1), None + + +def callback_num_inotify_watches(line): + """ Callback that detects if a line contains the folders monitored in realtime event + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + match = re.match(CB_FOLDERS_MONITORED_REALTIME, line) + + if match: + return match.group(1) + + +def callback_sync_start_time(line): + if callback_detect_synchronization(line): + match = re.match(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}).*", line) + if match: + return datetime.strptime(match.group(1), '%Y/%m/%d %H:%M:%S') + + +def callback_state_event_time(line): + if callback_detect_integrity_control_event(line): + match = re.match(r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}).*", line) + if match: + return datetime.strptime(match.group(1), '%Y/%m/%d %H:%M:%S') + + +def callback_real_time_whodata_started(line): + """ Callback that detects if a line contains "Whodata engine started" event + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + if CB_REALTIME_WHODATA_ENGINE_STARTED in line: + return True + + +def callback_detect_registry_integrity_clear_event(line): + """ Callback that detects if a line contains a registry integrity_clear event + + Args: + line (String): string line to be checked by callback in File_Monitor. + """ + event = callback_detect_integrity_control_event(line) + if event and event['component'] == 'fim_registry_key' and event['type'] == 'integrity_clear': + return True + if event and event['component'] == 'fim_registry_value' and event['type'] == 'integrity_clear': + return True + return None + + +def callback_disk_quota_limit_reached(line): + match = re.match(CB_FILE_EXCEEDS_DISK_QUOTA, line) + + if match: + return match.group(2) + + +def callback_detect_file_added_event(line): + """ Callback that detects if a line in a log is a file added event. + + Args: + line (String): string line to be checked by callback in FileMonitor. + + Returns: + returns JSON string from log. + """ + json_event = callback_detect_event(line) + + if json_event is not None: + if json_event['data']['type'] == 'added': + return json_event + + return None + + +def callback_detect_file_modified_event(line): + """ Callback that detects if a line in a log is a file modified event. + + Args: + line (String): string line to be checked by callback in FileMonitor. + + Returns: + returns JSON string from log. + """ + json_event = callback_detect_event(line) + + if json_event is not None: + if json_event['data']['type'] == 'modified': + return json_event + + return None + + +def callback_detect_file_deleted_event(line): + """ Callback that detects if a line in a log is a file deleted event. + + Args: + line (String): string line to be checked by callback in FileMonitor. + + Returns: + returns JSON string from log. + """ + json_event = callback_detect_event(line) + + if json_event is not None: + if json_event['data']['type'] == 'deleted': + return json_event + + return None + + +def callback_audit_cannot_start(line): + """ Callback that detects if a line shows whodata engine could not start and monitoring switched to realtime. 
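+
+    The log message matched is 'Who-data engine could not start. Switching who-data to real-time.'.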
+ + Args: + line (String): string line to be checked by callback in FileMonitor. + + Returns: + boolean: return True if line matches, None otherwise + """ + match = re.match(r'.*Who-data engine could not start. Switching who-data to real-time.', line) + if match: + return True + + +def callback_restricted(line): + """ Callback that detects if a line in a log if a file is ignored due to configured restrict tag. + + Returns: + string: returns path for the entry that is being ignored. + """ + match = re.match(r".*Ignoring entry '(.*?)' due to restriction '.*?'", line) + if match: + return match.group(1) + return None + + +# Event checkers +def check_fim_event(file_monitor=None, callback='', error_message=None, update_position=True, + timeout=T_60, accum_results=1, file_to_monitor=LOG_FILE_PATH): + """Check if a analysisd event occurs + + Args: + file_monitor (FileMonitor): FileMonitor object to monitor the file content. + callback (str): log regex to check in Wazuh log + error_message (str): error message to show in case of expected event does not occur + update_position (boolean): filter configuration parameter to search in Wazuh log + timeout (str): timeout to check the event in Wazuh log + accum_results (int): Accumulation of matches. + """ + file_monitor = FileMonitor(file_to_monitor) if file_monitor is None else file_monitor + error_message = f"Could not find this event in {file_to_monitor}: {callback}" if error_message is None else \ + error_message + + file_monitor.start(timeout=timeout, update_position=update_position, accum_results=accum_results, + callback=generate_monitoring_callback(callback), error_message=error_message) + + +def detect_initial_scan(file_monitor): + """Detect initial scan when restarting Wazuh. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + """ + file_monitor.start(timeout=T_60, callback=callback_detect_end_scan, + error_message=ERR_MSG_SCHEDULED_SCAN_ENDED) + + +def detect_initial_scan_start(file_monitor): + """Detect initial scan start when restarting Wazuh. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + """ + file_monitor.start(timeout=T_60, callback=callback_detect_scan_start, + error_message=ERR_MSG_SCHEDULED_SCAN_STARTED) + + +def detect_realtime_start(file_monitor): + """Detect realtime engine start when restarting Wazuh. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + """ + file_monitor.start(timeout=T_60, callback=generate_monitoring_callback(CB_FOLDERS_MONITORED_REALTIME), + error_message=ERR_MSG_FOLDERS_MONITORED_REALTIME) + + +def detect_whodata_start(file_monitor): + """Detect whodata engine start when restarting Wazuh. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + """ + file_monitor.start(timeout=T_60, callback=generate_monitoring_callback(CB_REALTIME_WHODATA_ENGINE_STARTED), + error_message=ERR_MSG_WHODATA_ENGINE_EVENT) + + +def detect_windows_sacl_configured(file_monitor, file='.*'): + """Detects when windows permision checks have been configured for a given file. 
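+
+    Example (illustrative; 'wazuh_log_monitor' is a placeholder FileMonitor over the agent log):
+        detect_windows_sacl_configured(wazuh_log_monitor)
+
+    Note that `file` is interpolated into a regular expression, so a literal Windows path should be
+    passed with its regex-special characters escaped (for example via re.escape()).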
+ + Args: + file_monitor (FileMonitor): file log monitor to detect events + file: The path of the file that will be monitored + """ + + pattern = fr".*win_whodata.*The SACL of '({file})' will be configured" + file_monitor.start(timeout=T_60, callback=generate_monitoring_callback(pattern), + error_message=ERR_MSG_SACL_CONFIGURED_EVENT) + + +def detect_windows_whodata_mode_change(file_monitor, file='.*'): + """Detects whe monitoring for a file changes from whodata to real-time. + + Args: + file_monitor (FileMonitor): file log monitor to detect events + file: The path of the file that will be monitored + """ + + pattern = fr".*set_whodata_mode_changes.*The '({file})' directory starts to be monitored in real-time mode." + + file_monitor.start(timeout=T_60, callback=generate_monitoring_callback(pattern), + error_message=ERR_MSG_WHODATA_REALTIME_MODE_CHANGE_EVENT) diff --git a/deps/wazuh_testing/wazuh_testing/modules/fim/utils.py b/deps/wazuh_testing/wazuh_testing/modules/fim/utils.py index e69de29bb2..47b9a5e34d 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/fim/utils.py +++ b/deps/wazuh_testing/wazuh_testing/modules/fim/utils.py @@ -0,0 +1,807 @@ +# Copyright (C) 2015-2022, Wazuh Inc. +# Created by Wazuh, Inc. . +# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +import os +import sys +import time +import platform +from datetime import datetime, timedelta +from typing import Sequence, Union, Generator, Any +from copy import deepcopy +from hashlib import sha1 + +from wazuh_testing import global_parameters, logger, REGULAR, LOG_FILE_PATH, WAZUH_PATH +from wazuh_testing.tools.file import create_file, modify_file, delete_file, generate_string +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.tools.time import TimeMachine +from wazuh_testing.modules import fim +from wazuh_testing.modules.fim import event_monitor as ev +from wazuh_testing.modules.fim.classes import CustomValidator, EventChecker + + +if sys.platform == 'win32': + import win32con + import win32api + import pywintypes + + +# Variables +_os_excluded_from_rt_wd = ['darwin', 'sunos5'] + + +# Functions +def get_sync_msgs(timeout, new_data=True): + """Look for as many synchronization events as possible. + + This function will look for the synchronization messages until a Timeout is raised or 'max_events' is reached. + + Args: + timeout (int): Timeout that will be used to get the dbsync_no_data message. + new_data (bool): Specifies if the test will wait the event `dbsync_no_data`. + + Returns: + A list with all the events in json format. + """ + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + events = [] + if new_data: + wazuh_log_monitor.start(timeout=timeout, + callback=generate_monitoring_callback(ev.CB_REGISTRY_DBSYNC_NO_DATA), + error_message='Did not receive expected ' + '"db sync no data" event') + for _ in range(0, fim.MAX_EVENTS_VALUE): + try: + sync_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=ev.callback_detect_registry_integrity_state_event, + accum_results=1, + error_message='Did not receive expected ' + 'Sending integrity control message"').result() + except TimeoutError: + break + events.append(sync_event) + return events + + +def find_value_in_event_list(key_path, value_name, event_list): + """Function that looks for a key path and value_name in a list of json events. + + Args: + path (str): Path of the registry key. + value_name (str): Name of the value. 
+ event_list (list): List containing the events in JSON format. + + Returns: + The event that matches the specified path. None if no event was found. + """ + for event in event_list: + if 'value_name' not in event.keys(): + continue + if str(event['path']) == key_path and event['value_name'] == value_name: + return event + return None + + +def create_values_content(value_name, size): + """ Create a string of data content of a given size for a specific key value""" + return {value_name: generate_string(size, '0')} + + +def create_registry(key, subkey, arch): + """Create a registry given the key and the subkey. The registry is opened if it already exists. + + Args: + key (pyHKEY): the key of the registry (HKEY_* constants). + subkey (str): the subkey (name) of the registry. + arch (int): architecture of the registry (KEY_WOW64_32KEY or KEY_WOW64_64KEY). + + Returns: + str: the key handle of the new/opened key. + """ + + if sys.platform == 'win32': + try: + logger.info("Creating registry key " + str(os.path.join(fim.registry_class_name[key], subkey))) + + key = win32api.RegCreateKeyEx(key, subkey, win32con.KEY_ALL_ACCESS | arch) + + return key[0] # Ignore the flag that RegCreateKeyEx returns + except OSError as e: + logger.warning(f"Registry could not be created: {e}") + except pywintypes.error as e: + logger.warning(f"Registry could not be created: {e}") + + +def modify_key_perms(key, subkey, arch, user): + """ + Modify the permissions (ACL) of a registry key. + + Args: + key (pyHKEY): the key of the registry (HKEY_* constants). + subkey (str): the subkey (name) of the registry. + arch (str): architecture of the system. + user (PySID): user that is going to be used for the modification. + """ + if sys.platform == 'win32': + print_arch = '[x64]' if arch == fim.KEY_WOW64_64KEY else '[x32]' + logger.info(f"- Changing permissions of {print_arch}{os.path.join(fim.registry_class_name[key], subkey)}") + + try: + key_h = fim.RegOpenKeyEx(key, subkey, 0, fim.KEY_ALL_ACCESS | arch) + sd = win32api.RegGetKeySecurity(key_h, win32con.DACL_SECURITY_INFORMATION) + acl = sd.GetSecurityDescriptorDacl() + acl.AddAccessAllowedAce(ntc.GENERIC_ALL, user) + sd.SetDacl(True, acl, False) + + win32api.RegSetKeySecurity(key_h, win32con.DACL_SECURITY_INFORMATION, sd) + except OSError as e: + logger.warning(f"Registry permissions could not be modified: {e}") + except pywintypes.error as e: + logger.warning(f"Registry permissions could not be modified: {e}") + + +def modify_registry_key_mtime(key, subkey, arch): + """Modify the modification time of a registry key. + + Args: + key (pyHKEY): the key handle of the registry. + subkey (str): the subkey (name) of the registry. + arch (str): architecture of the system. + """ + + if sys.platform == 'win32': + print_arch = '[x64]' if arch == fim.KEY_WOW64_64KEY else '[x32]' + logger.info(f"- Changing mtime of {print_arch}{os.path.join(fim.registry_class_name[key], subkey)}") + + try: + key_h = fim.RegOpenKeyEx(key, subkey, 0, fim.KEY_ALL_ACCESS | arch) + + modify_registry_value(key_h, "dummy_value", fim.REG_SZ, "this is a dummy value") + time.sleep(2) + delete_registry_value(key_h, "dummy_value") + + fim.RegCloseKey(key_h) + key_h = fim.RegOpenKeyEx(key, subkey, 0, fim.KEY_ALL_ACCESS) + except OSError as e: + logger.warning(f"Registry mtime could not be modified: {e}") + except pywintypes.error as e: + logger.warning(f"Registry mtime could not be modified: {e}") + + +def modify_registry_owner(key, subkey, arch, user): + """Modify the owner of a registry key. 
+ + Arch: + key (pyHKEY): the key handle of the registry. + subkey (str): the subkey (name) of the registry. + arch (str): architecture of the system. + user (pySID): identifier of the user (pySID) + + Returns: + str: key of the registry. + """ + if sys.platform == 'win32': + print_arch = '[x64]' if arch == fim.KEY_WOW64_64KEY else '[x32]' + logger.info(f"- Changing owner of {print_arch}{os.path.join(fim.registry_class_name[key], subkey)}") + + try: + key_h = fim.RegOpenKeyEx(key, subkey, 0, fim.KEY_ALL_ACCESS | arch) + desc = win32api.RegGetKeySecurity(key_h, + win32sec.DACL_SECURITY_INFORMATION | win32sec.OWNER_SECURITY_INFORMATION) + desc.SetSecurityDescriptorOwner(user, 0) + + win32api.RegSetKeySecurity(key_h, win32sec.OWNER_SECURITY_INFORMATION | win32sec.DACL_SECURITY_INFORMATION, + desc) + + return key_h + except OSError as e: + logger.warning(f"Registry owner could not be modified: {e}") + except pywintypes.error as e: + logger.warning(f"Registry owner could not be modified: {e}") + + +def modify_registry(key, subkey, arch): + """Modify a registry key. + + Args: + key (pyHKEY): the key handle of the registry. + subkey (str): the subkey (name) of the registry. + arch (str): architecture of the system. + """ + print_arch = '[x64]' if arch == fim.KEY_WOW64_64KEY else '[x32]' + logger.info(f"Modifying registry key {print_arch}{os.path.join(fim.registry_class_name[key], subkey)}") + + modify_key_perms(key, subkey, arch, win32sec.LookupAccountName(None, f"{platform.node()}\\{os.getlogin()}")[0]) + modify_registry_owner(key, subkey, arch, win32sec.LookupAccountName(None, f"{platform.node()}\\{os.getlogin()}")[0]) + modify_registry_key_mtime(key, subkey, arch) + + +def modify_registry_value(key_h, value_name, type, value): + """ + Modify the content of a registry. If the value doesn't not exists, it will be created. + + Args: + key_h (pyHKEY): the key handle of the registry. + value_name (str): the value to be set. + type (int): type of the value. + value (str): the content that will be written to the registry value. + """ + if sys.platform == 'win32': + try: + logger.info(f"Modifying value '{value_name}' of type {fim.registry_value_type[type]} and value '{value}'") + win32api.RegSetValueEx(key_h, value_name, 0, type, value) + except OSError as e: + logger.warning(f"Could not modify registry value content: {e}") + except pywintypes.error as e: + logger.warning(f"Could not modify registry value content: {e}") + + +def delete_registry(key, subkey, arch): + """Delete a registry key. + + Args: + key (pyHKEY): the key of the registry (HKEY_* constants). + subkey (str): the subkey (name) of the registry. + arch (int): architecture of the registry (KEY_WOW64_32KEY or KEY_WOW64_64KEY). + """ + if sys.platform == 'win32': + print_arch = '[x64]' if arch == fim.KEY_WOW64_64KEY else '[x32]' + logger.info(f"Removing registry key {print_arch}{str(os.path.join(fim.registry_class_name[key], subkey))}") + + try: + key_h = win32api.RegOpenKeyEx(key, subkey, 0, win32con.KEY_ALL_ACCESS | arch) + win32api.RegDeleteTree(key_h, None) + win32api.RegDeleteKeyEx(key, subkey, samDesired=arch) + except OSError as e: + logger.warning(f"Couldn't remove key {str(os.path.join(fim.registry_class_name[key], subkey))}: {e}") + except pywintypes.error as e: + logger.warning(f"Couldn't remove key {str(os.path.join(fim.registry_class_name[key], subkey))}: {e}") + + +def delete_registry_value(key_h, value_name): + """Delete a registry value from a registry key. + + Args: + key_h (pyHKEY): the key handle of the registry. 
+ value_name (str): the value to be deleted. + """ + if sys.platform == 'win32': + logger.info(f"Removing registry value {value_name}.") + + try: + win32api.RegDeleteValue(key_h, value_name) + except OSError as e: + logger.warning(f"Couldn't remove registry value {value_name}: {e}") + except pywintypes.error as e: + logger.warning(f"Couldn't remove registry value {value_name}: {e}") + + +def calculate_registry_diff_paths(reg_key, reg_subkey, arch, value_name): + """Calculate the diff folder path of a value. + + Args: + reg_key (str): registry name (HKEY_* constants). + reg_subkey (str): path of the subkey. + arch (int): architecture of the registry. + value_name (str): name of the value. + + Returns: + tuple: diff folder path of the key and the path of the value. + """ + key_path = os.path.join(reg_key, reg_subkey) + folder_path = "{} {}".format("[x32]" if arch == fim.KEY_WOW64_32KEY else "[x64]", + sha1(key_path.encode()).hexdigest()) + diff_file = os.path.join(WAZUH_PATH, 'queue', 'diff', 'registry', folder_path, + sha1(value_name.encode()).hexdigest(), 'last-entry.gz') + return (folder_path, diff_file) + + +def transform_registry_list(value_list=['test_value'], value_type=fim.REG_SZ, callback=ev.callback_value_event): + if sys.platform == 'win32': + if value_type in [win32con.REG_SZ, win32con.REG_MULTI_SZ]: + value_default_content = '' + else: + value_default_content = 1 + + aux_dict = {} + if isinstance(value_list, list): + for elem in value_list: + aux_dict[elem] = (value_default_content, callback) + + elif isinstance(value_list, dict): + for key, elem in value_list.items(): + aux_dict[key] = (elem, callback) + + else: + raise ValueError('It can only be a list or dictionary') + + return aux_dict + + +def set_check_options(options): + """ Return set of check options. If options given is none, it will return check_all""" + options_set = fim.REQUIRED_REG_VALUE_ATTRIBUTES[fim.CHECK_ALL] + if options is not None: + options_set = options_set.intersection(options) + return options_set + + +def registry_value_create(root_key, registry_sub_key, log_monitor, arch=fim.KEY_WOW64_64KEY, value_list=['test_value'], + min_timeout=1, options=None, wait_for_scan=False, scan_delay=10, triggers_event=True, + encoding=None, callback=ev.callback_value_event, validators_after_create=None, + value_type=fim.REG_SZ): + """Check if creation of registry value events are detected by syscheck. + + This function provides multiple tools to validate events with custom validators. + + Args: + root_key (str): root key (HKEY_LOCAL_MACHINE, HKEY_LOCAL_USER, etc). + registry_sub_key (str): path of the subkey that will be created. + log_monitor (FileMonitor): file event monitor. + arch (int): Architecture of the registry key (KEY_WOW64_32KEY or KEY_WOW64_64KEY). Default `KEY_WOW64_64KEY` + value_list (list(str) or dict, optional): If it is a list, it will be transformed to a dict with empty + strings in each value. Default `['test_value']` + min_timeout (int, optional): Minimum timeout. Default `1` + options (set, optional): Set with all the checkers. Default `None` + wait_for_scan (boolean, optional): Boolean to determine if there will be time travels or not. + Default `False` + scan_delay (int, optional): time the test sleeps waiting for scan to be triggered. + triggers_event (boolean, optional): Boolean to determine if the + event should be raised or not. Default `True` + encoding (str, optional): String to determine the encoding of the registry value name. 
Default `None` + callback (callable, optional): Callback to use with the log monitor. Default `callback_value_event` + validators_after_create (list, optional): List of functions that validates an event triggered when a new + registry value is created. Each function must accept a param to receive the event + to be validated. Default `None` + """ + if sys.platform == 'win32': + # Transform registry list + if root_key not in fim.registry_parser: + raise ValueError("root_key not valid") + + registry_path = os.path.join(root_key, registry_sub_key) + + value_list = transform_registry_list(value_list) + if value_type in [fim.REG_SZ, fim.REG_MULTI_SZ]: + value_added_content = 'added' + else: + value_added_content = 0 + + options_set = set_check_options(options) + + custom_validator = CustomValidator(validators_after_create, None, None, None) + + registry_event_checker = RegistryEventChecker(log_monitor=log_monitor, registry_key=registry_path, + registry_dict=value_list, options=options_set, + custom_validator=custom_validator, encoding=encoding, + callback=callback, is_value=True) + + # Open the desired key + key_handle = create_registry(fim.registry_parser[root_key], registry_sub_key, arch) + + # Create registry values + for name, _ in value_list.items(): + if name in registry_path: + continue + modify_registry_value(key_handle, name, value_type, value_added_content) + + wait_for_scheduled_scan(wait_for_scan=wait_for_scan, interval=scan_delay, monitor=log_monitor) + + registry_event_checker.fetch_and_check('added', min_timeout=min_timeout, triggers_event=triggers_event) + + if triggers_event: + logger.info("'added' {} detected as expected.\n".format("events" if len(value_list) > 1 else "event")) + + +def registry_value_update(root_key, registry_sub_key, log_monitor, arch=fim.KEY_WOW64_64KEY, value_list=['test_value'], + wait_for_scan=False, scan_delay=10, min_timeout=1, options=None, triggers_event=True, + encoding=None, callback=ev.callback_value_event, validators_after_update=None, + value_type=fim.REG_SZ): + """Check if update registry value events are detected by syscheck. + + This function provides multiple tools to validate events with custom validators. + + Args: + root_key (str): root key (HKEY_LOCAL_MACHINE, HKEY_LOCAL_USER, etc). + registry_sub_key (str): path of the subkey that will be created. + log_monitor (FileMonitor): file event monitor. + arch (int): Architecture of the registry key (KEY_WOW64_32KEY or KEY_WOW64_64KEY). Default `KEY_WOW64_64KEY` + value_list (list(str) or dict, optional): If it is a list, it will be transformed to a dict with empty + strings in each value. Default `['test_value']` + wait_for_scan (boolean, optional): Boolean to determine if there will waits for scheduled scans. + Default `False` + scan_delay (int, optional): time the test sleeps waiting for scan to be triggered. + min_timeout (int, optional): Minimum timeout. Default `1` + options (set, optional): Set with all the checkers. Default `None` + triggers_event (boolean, optional): Boolean to determine if the + event should be raised or not. Default `True` + encoding (str, optional): String to determine the encoding of the registry value name. Default `None` + callback (callable, optional): Callback to use with the log monitor. Default `callback_value_event` + validators_after_update (list, optional): List of functions that validates an event triggered + when a new registry value is modified. Each function must accept a param to receive the event + to be validated. 
Default `None` + """ + if sys.platform == 'win32': + # Transform registry list + if root_key not in fim.registry_parser: + raise ValueError("root_key not valid") + + registry_path = os.path.join(root_key, registry_sub_key) + + value_list = transform_registry_list(value_list=value_list, value_type=value_type, callback=callback) + + options_set = set_check_options(options) + + custom_validator = CustomValidator(None, validators_after_update, None, None) + + registry_event_checker = RegistryEventChecker(log_monitor=log_monitor, registry_key=registry_path, + registry_dict=value_list, options=options_set, + custom_validator=custom_validator, encoding=encoding, + callback=callback, is_value=True) + + key_handle = create_registry(fim.registry_parser[root_key], registry_sub_key, arch) + + # Modify previous registry values + for name, content in value_list.items(): + if name in registry_path: + continue + + modify_registry_value(key_handle, name, value_type, content[0]) + + wait_for_scheduled_scan(wait_for_scan=wait_for_scan, interval=scan_delay, monitor=log_monitor) + registry_event_checker.fetch_and_check('modified', min_timeout=min_timeout, triggers_event=triggers_event) + + if triggers_event: + logger.info("'modified' {} detected as expected.\n".format("events" if len(value_list) > 1 else "event")) + + +def registry_value_delete(root_key, registry_sub_key, log_monitor, arch=fim.KEY_WOW64_64KEY, value_list=['test_value'], + wait_for_scan=False, scan_delay=10, min_timeout=1, options=None, triggers_event=True, + encoding=None, callback=ev.callback_value_event, validators_after_delete=None, + value_type=fim.REG_SZ): + """Check if delete registry value events are detected by syscheck. + + This function provides multiple tools to validate events with custom validators. + + Args: + root_key (str): root key (HKEY_LOCAL_MACHINE, HKEY_LOCAL_USER, etc). + registry_sub_key (str): path of the subkey that will be created. + log_monitor (FileMonitor): file event monitor. + arch (int): Architecture of the registry key (KEY_WOW64_32KEY or KEY_WOW64_64KEY). Default `KEY_WOW64_64KEY` + value_list (list(str) or dict, optional): If it is a list, it will be transformed to a dict with empty + strings in each value. Default `['test_value']` + wait_for_scan (boolean, optional): Boolean to determine if there will waits for scheduled scans. + Default `False` + scan_delay (int, optional): time the test sleeps waiting for scan to be triggered. + min_timeout (int, optional): Minimum timeout. Default `1` + options (set, optional): Set with all the checkers. Default `None` + triggers_event (boolean, optional): Boolean to determine if the event should be raised or not. Default `True` + encoding (str, optional): String to determine the encoding of the registry value name. Default `None` + callback (callable, optional): Callback to use with the log monitor. Default `callback_value_event` + validators_after_delete (list, optional): List of functions that validates an event triggered + when a new registry value is deleted. Each function must accept a param to receive the event + to be validated. 
Default `None` + """ + if sys.platform == 'win32': + # Transform registry list + if root_key not in fim.registry_parser: + raise ValueError("root_key not valid") + + registry_path = os.path.join(root_key, registry_sub_key) + + value_list = transform_registry_list(value_list=value_list, value_type=value_type, callback=callback) + + options_set = set_check_options(options) + + custom_validator = CustomValidator(None, None, validators_after_delete, None) + + registry_event_checker = RegistryEventChecker(log_monitor=log_monitor, registry_key=registry_path, + registry_dict=value_list, options=options_set, + custom_validator=custom_validator, encoding=encoding, + callback=callback, is_value=True) + + key_handle = create_registry(fim.registry_parser[root_key], registry_sub_key, arch) + + # Delete previous registry values + for name, _ in value_list.items(): + if name in registry_path: + continue + delete_registry_value(key_handle, name) + + wait_for_scheduled_scan(wait_for_scan=wait_for_scan, interval=scan_delay, monitor=log_monitor) + registry_event_checker.fetch_and_check('deleted', min_timeout=min_timeout, triggers_event=triggers_event) + + if triggers_event: + logger.info("'deleted' {} detected as expected.\n".format("events" if len(value_list) > 1 else "event")) + + +# Old Configuration framework +def generate_params(extra_params: dict = None, apply_to_all: Union[Sequence[Any], Generator[dict, None, None]] = None, + modes: list = None): + """ + Expand params and metadata with optional FIM modes . + + extra_params = {'WILDCARD': {'attribute': ['list', 'of', 'values']}} - Max. 3 elements in the list of values + or + {'WILDCARD': {'attribute': 'value'}} - It will have the same value for scheduled, realtime and + whodata + or + {'WILDCARD': 'value'} - Valid when param is not an attribute. (ex: 'MODULE_NAME': __name__) + or + {'WILDCARD': ['list', 'of', 'values']} - Same as above with multiple values. The length of the list + must be the same as the length of the mode list. + + + apply_to_all = Same structure as above. The difference is, these params will be applied for every existing + configuration. They are applied after the `extra_params`. + + Examples: + >>> generate_params(extra_params={'REPORT_CHANGES': {'report_changes': ['yes', 'no']}, 'MODULE_NAME': 'name'}, + ... modes=['realtime', 'whodata']) + ([{'FIM_MODE': {'realtime': 'yes'}, 'REPORT_CHANGES': {'report_changes': 'yes'}, 'MODULE_NAME': 'name'}, + {'FIM_MODE': {'whodata': 'yes'}, 'REPORT_CHANGES': {'report_changes': 'no'}, 'MODULE_NAME': 'name'}], + [{'fim_mode': 'realtime', 'report_changes': 'yes', 'module_name': 'name'}, + {'fim_mode': 'whodata', 'report_changes': 'no', 'module_name': 'name'}]) + + >>> generate_params(extra_params={'MODULE_NAME': 'name'}, apply_to_all={'FREQUENCY': {'frequency': [1, 2]}}, + ... 
modes=['scheduled', 'realtime']) + ([{'FIM_MODE': '', 'MODULE_NAME': 'name', 'FREQUENCY': {'frequency': 1}}, + {'FIM_MODE': {'realtime': 'yes'}, 'MODULE_NAME': 'name', 'FREQUENCY': {'frequency': 1}}, + {'FIM_MODE': '', 'MODULE_NAME': 'name', 'FREQUENCY': {'frequency': 2}}, + {'FIM_MODE': {'realtime': 'yes'}, 'MODULE_NAME': 'name', 'FREQUENCY': {'frequency': 2}}], + [{'fim_mode': 'scheduled', 'module_name': 'name', 'frequency': {'frequency': 1}}, + {'fim_mode': 'realtime', 'module_name': 'name', 'frequency': {'frequency': 1}}, + {'fim_mode': 'scheduled', 'module_name': 'name', 'frequency': {'frequency': 2}}, + {'fim_mode': 'realtime', 'module_name': 'name', 'frequency': {'frequency': 2}}]) + + >>> generate_params(extra_params={'LIST_OF_VALUES': {'list': [[1,2,3]]}, 'MODULE_NAME': 'name'}, + ... modes=['scheduled']) + ([{'FIM_MODE': '', 'LIST_OF_VALUES': {'list': [1, 2, 3]}, 'MODULE_NAME': 'name'}], + [{'fim_mode': 'scheduled', 'list_of_values': [1, 2, 3], 'module_name': 'name'}]) + + Args: + extra_params (dict, optional): Dictionary with all the extra parameters to add for every mode. Default `None` + apply_to_all (iterable object or generator object): dictionary with all the extra parameters to add to + every configuration. Default `None` + modes (list, optional): FIM modes to be applied. Default `None` (scheduled, realtime and whodata) + + Returns: + tuple (list, list): Tuple with the list of parameters and the list of metadata. + """ + + def transform_param(mutable_object: dict): + """Transform `mutable_object` into a valid data structure.""" + for k, v in mutable_object.items(): + if isinstance(v, dict): + for v_key, v_value in v.items(): + mutable_object[k][v_key] = v_value if isinstance(v_value, list) else [v_value] * len(modes) + elif not isinstance(v, list): + mutable_object[k] = [v] * len(modes) + + fim_param = [] + fim_metadata = [] + + # Get FIM params and metadata + modes = modes if modes else ['scheduled', 'realtime', 'whodata'] + for mode in modes: + param, metadata = get_fim_mode_param(mode) + if param: + fim_param.append(param) + fim_metadata.append(metadata) + + # If we have extra_params to add, assert they have the exact number of elements as modes + # Also, if there aren't extra_params, let `add` to False to at least put `FIM_MODES` + add = False + if extra_params: + transform_param(extra_params) + for _, value in extra_params.items(): + if isinstance(value, dict): + assert len(next(iter(value.values()))) == len(modes), 'Length not equal between extra_params values ' \ + 'and modes' + else: + assert len(value) == len(modes), 'Length not equal between extra_params values and modes' + add = True + + params = [] + metadata = [] + + # Iterate over fim_mode params and metadata and add one configuration for every existing fim_mode + for i, (fim_mode_param, fim_mode_meta) in enumerate(zip(fim_param, fim_metadata)): + p_aux: dict = deepcopy(fim_mode_param) + m_aux: dict = deepcopy(fim_mode_meta) + if add: + for key, value in extra_params.items(): + p_aux[key] = {k: v[i] for k, v in value.items()} if isinstance(value, dict) else \ + value[i] if isinstance(value, list) else value + m_aux[key.lower()] = next(iter(value.values()))[i] if isinstance(value, dict) else \ + value[i] if isinstance(value, list) else value + params.append(p_aux) + metadata.append(m_aux) + + # Append new parameters and metadata for every existing configuration + if apply_to_all: + aux_params = deepcopy(params) + aux_metadata = deepcopy(metadata) + params.clear() + metadata.clear() + for element in 
apply_to_all: + for p_dict, m_dict in zip(aux_params, aux_metadata): + params.append({**p_dict, **element}) + metadata.append({**m_dict, **{wildcard.lower(): value for wildcard, value in element.items()}}) + + return params, metadata + + +def get_fim_mode_param(mode, key='FIM_MODE'): + """Get the parameters for the FIM mode. + + This is useful to generate the directories tag with several fim modes. It also + takes into account the current platform so realtime and whodata does not apply + to darwin. + + Args: + mode (string): Must be one of the following 'scheduled', 'realtime' or 'whodata' + key (string, optional): Name of the placeholder expected in the target configuration. Default 'FIM_MODE' + + Returns: + tuple (dict, dict): + Params: The key is `key` and the value is the string to be replaced in the target configuration. + Metadata: The key is `key` in lowercase and the value is always `mode`. + """ + + if mode not in global_parameters.fim_mode: + return None, None + + metadata = {key.lower(): mode} + if mode == 'scheduled': + return {key: ''}, metadata + elif mode == 'realtime' and sys.platform not in _os_excluded_from_rt_wd: + return {key: {'realtime': 'yes'}}, metadata + elif mode == 'whodata' and sys.platform not in _os_excluded_from_rt_wd: + return {key: {'whodata': 'yes'}}, metadata + else: + return None, None + + +def regular_file_cud(folder, log_monitor, file_list=['testfile0'], min_timeout=1, options=None, + triggers_event=True, encoding=None, validators_after_create=None, validators_after_update=None, + validators_after_delete=None, validators_after_cud=None, event_mode=None, escaped=False): + """Check if creation, update and delete events are detected by syscheck. + + This function provides multiple tools to validate events with custom validators. + + Args: + folder (str): Path where the files will be created. + log_monitor (FileMonitor): File event monitor. + file_list (list(str) or dict, optional): If it is a list, it will be transformed to a dict with + empty strings in each value. Default `['testfile0']` + min_timeout (int, optional): Minimum timeout. Default `1` + options (set, optional): Set with all the checkers. Default `None` + triggers_event (boolean, optional): Boolean to determine if the event should be raised or not. Default `True` + encoding (str, optional): String to determine the encoding of the file name. Default `None` + validators_after_create (list, optional): List of functions that validates an event triggered when a new file + is created. Each function must accept a param to receive the event to be validated. Default `None` + validators_after_update (list, optional): List of functions that validates an event triggered when a new file + is modified. Each function must accept a param to receive the event to be validated. Default `None` + validators_after_delete (list, optional): List of functions that validates an event triggered when a new file + is deleted. Each function must accept a param to receive the event to be validated. Default `None` + validators_after_cud (list, optional): List of functions that validates an event triggered when a new file + is created, modified or deleted. Each function must accept a param to receive + the event to be validated. Default `None` + event_mode (str, optional): Specifies the FIM scan mode to check in the events + """ + + # Transform file list + if not isinstance(file_list, list) and not isinstance(file_list, dict): + raise ValueError('Value error. 
It can only be list or dict') + elif isinstance(file_list, list): + file_list = {i: '' for i in file_list} + + custom_validator = CustomValidator(validators_after_create, validators_after_update, + validators_after_delete, validators_after_cud) + event_checker = EventChecker(log_monitor=log_monitor, folder=folder, file_list=file_list, options=options, + custom_validator=custom_validator, encoding=encoding, + callback=ev.callback_detect_file_added_event) + + # Create text files + for name, content in file_list.items(): + create_file(REGULAR, folder, name, content=content) + + event_checker.fetch_and_check('added', min_timeout=min_timeout, triggers_event=triggers_event, + event_mode=event_mode, escaped=escaped) + if triggers_event: + logger.info("'added' {} detected as expected.\n".format("events" if len(file_list) > 1 else "event")) + + # Modify previous text files + for name, content in file_list.items(): + modify_file(folder, name, is_binary=isinstance(content, bytes)) + + event_checker = EventChecker(log_monitor=log_monitor, folder=folder, file_list=file_list, options=options, + custom_validator=custom_validator, encoding=encoding, + callback=ev.callback_detect_file_modified_event) + event_checker.fetch_and_check('modified', min_timeout=min_timeout, triggers_event=triggers_event, + event_mode=event_mode, escaped=escaped) + if triggers_event: + logger.info("'modified' {} detected as expected.\n".format("events" if len(file_list) > 1 else "event")) + + # Delete previous text files + for name in file_list: + delete_file(os.path.join(folder, name)) + + event_checker = EventChecker(log_monitor=log_monitor, folder=folder, file_list=file_list, options=options, + custom_validator=custom_validator, encoding=encoding, + callback=ev.callback_detect_file_deleted_event) + event_checker.fetch_and_check('deleted', min_timeout=min_timeout, triggers_event=triggers_event, + event_mode=event_mode, escaped=escaped) + if triggers_event: + logger.info("'deleted' {} detected as expected.\n".format("events" if len(file_list) > 1 else "event")) + + +def check_time_travel(time_travel: bool, interval: timedelta = timedelta(hours=13), monitor: FileMonitor = None, + timeout=global_parameters.default_timeout): + """Checks if the conditions for changing the current time and date are met and call to the specific function + depending on those conditions. + + Optionally, a monitor may be used to check if a scheduled scan has been performed. + + This function is specially useful to deal with scheduled scans that are triggered on a time interval basis. + + Args: + time_travel (boolean): True if we need to update time. False otherwise. + interval (timedelta, optional): time interval that will be added to system clock. Default: 13 hours. + monitor (FileMonitor, optional): if passed, after changing system clock it will check for the end of the + scheduled scan. The `monitor` will not consume any log line. Default `None`. + timeout (int, optional): If a monitor is provided, this parameter sets how log to wait for the end of scan. + Raises + TimeoutError: if `monitor` is not `None` and the scan has not ended in the + default timeout specified in `global_parameters`. 
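+
+    Example:
+        A minimal illustrative call from a scheduled-mode test. It assumes a `FileMonitor` over the
+        Wazuh log (`LOG_FILE_PATH` is an assumption here; any monitored log path works):
+
+            wazuh_log_monitor = FileMonitor(LOG_FILE_PATH)
+            check_time_travel(time_travel=True, monitor=wazuh_log_monitor)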
+ """ + + if 'fim_mode' in global_parameters.current_configuration['metadata'].keys(): + mode = global_parameters.current_configuration['metadata']['fim_mode'] + if mode != 'scheduled' or mode not in global_parameters.fim_mode: + return + + if time_travel: + before = str(datetime.now()) + TimeMachine.travel_to_future(interval) + logger.info(f"Changing the system clock from {before} to {str(datetime.now())}") + + if monitor: + monitor.start(timeout=timeout, callback=ev.callback_detect_end_scan, + update_position=False, + error_message=f"End of scheduled scan not detected after {timeout} seconds") + + +def wait_for_scheduled_scan(wait_for_scan=False, interval: timedelta = timedelta(seconds=20), + monitor: FileMonitor = None, timeout=global_parameters.default_timeout): + """Checks if the conditions for waiting for a new scheduled scan. + + Optionally, a monitor may be used to check if a scheduled scan has been performed. + + This function is specially useful to deal with scheduled scans that are triggered on a time interval basis. + + Args: + wait_scan (boolean): True if we need to update time. False otherwise. + interval (timedelta, optional): time interval that will be waited for the scheduled scan to start. + Default: 20 seconds. + monitor (FileMonitor, optional): if passed, after changing system clock it will check for the end of the + scheduled scan. The `monitor` will not consume any log line. Default `None`. + timeout (int, optional): If a monitor is provided, this parameter sets how long to wait for the end of scan. + Raises + TimeoutError: if `monitor` is not `None` and the scan has not ended in the + default timeout specified in `global_parameters`. + """ + + if 'fim_mode' in global_parameters.current_configuration['metadata'].keys(): + mode = global_parameters.current_configuration['metadata']['fim_mode'] + if mode != 'scheduled' or mode not in global_parameters.fim_mode: + return + + if wait_for_scan: + logger.info(f"waiting for scheduled scan to start for {interval} seconds") + time.sleep(interval) + if monitor: + monitor.start(timeout=timeout, callback=ev.callback_detect_end_scan, + update_position=False, + error_message=f"End of scheduled scan not detected after {timeout} seconds") diff --git a/deps/wazuh_testing/wazuh_testing/modules/logcollector/__init__.py b/deps/wazuh_testing/wazuh_testing/modules/logcollector/__init__.py index 5697cf620c..356620f0a8 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/logcollector/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/modules/logcollector/__init__.py @@ -1,5 +1,27 @@ +import sys +from wazuh_testing.tools.monitoring import LOG_COLLECTOR_DETECTOR_PREFIX, AGENT_DETECTOR_PREFIX -LOG_COLLECTOR_PREFIX = r'.*wazuh-logcollector.*' + +# Variables +LOG_COLLECTOR_PREFIX = AGENT_DETECTOR_PREFIX if sys.platform == 'win32' else LOG_COLLECTOR_DETECTOR_PREFIX WINDOWS_AGENT_PREFIX = r'.*wazuh-agent.*' MAILD_PREFIX = r'.*wazuh-maild.*' + + +# Error Messages GENERIC_CALLBACK_ERROR_COMMAND_MONITORING = 'The expected command monitoring log has not been produced' +ERR_MSG_UNEXPECTED_IGNORE_EVENT = "Found unexpected 'Ignoring the log due to ignore/restrict config' event" + + +# Local_internal_options +if sys.platform == 'win32': + LOGCOLLECTOR_DEFAULT_LOCAL_INTERNAL_OPTIONS = { + 'windows.debug': '2', + 'agent.debug': '2' + } +else: + LOGCOLLECTOR_DEFAULT_LOCAL_INTERNAL_OPTIONS = { + 'logcollector.debug': '2', + 'monitord.rotate_log': '0', + 'agent.debug': '0', + } diff --git a/deps/wazuh_testing/wazuh_testing/modules/logcollector/event_monitor.py 
b/deps/wazuh_testing/wazuh_testing/modules/logcollector/event_monitor.py index 1cbbc9e6c6..2bb0b87fc8 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/logcollector/event_monitor.py +++ b/deps/wazuh_testing/wazuh_testing/modules/logcollector/event_monitor.py @@ -1,9 +1,9 @@ import re - -from wazuh_testing import T_30 -from wazuh_testing.modules.logcollector import LOG_COLLECTOR_PREFIX +import sys +import pytest +from wazuh_testing import T_30, T_10, LOG_FILE_PATH +from wazuh_testing.modules.logcollector import LOG_COLLECTOR_PREFIX, ERR_MSG_UNEXPECTED_IGNORE_EVENT from wazuh_testing.tools.monitoring import FileMonitor -from wazuh_testing import LOG_FILE_PATH def make_logcollector_callback(pattern, prefix=LOG_COLLECTOR_PREFIX, escape=False): @@ -50,11 +50,13 @@ def check_logcollector_event(file_monitor=None, callback='', error_message=None, error_message = f"Could not find this event in {file_to_monitor}: {callback}" if error_message is None else \ error_message - file_monitor.start(timeout=timeout, update_position=update_position, accum_results=accum_results, - callback=make_logcollector_callback(callback, prefix, escape), error_message=error_message) + result = file_monitor.start(timeout=timeout, update_position=update_position, accum_results=accum_results, + callback=make_logcollector_callback(callback, prefix, escape), + error_message=error_message).result() + return result -def check_analyzing_file(file, error_message, prefix, file_monitor=None): +def check_analyzing_file(file, prefix, error_message=None, file_monitor=None): """Create a callback to detect if logcollector is monitoring a file. Args: @@ -63,13 +65,17 @@ def check_analyzing_file(file, error_message, prefix, file_monitor=None): prefix (str): Daemon that generates the error log. file_monitor (FileMonitor): Log monitor. """ + if error_message is None: + error_message = f"Did not receive the expected 'Analyzing file: {file}' event" + check_logcollector_event(file_monitor=file_monitor, timeout=T_30, - callback=fr".*Analyzing file: '{re.escape(file)}'.*", + callback=fr".*Analyzing file: '{file}'.*", error_message=error_message, prefix=prefix) -def check_syslog_messages(message, error_message, prefix, file_monitor=None, timeout=T_30, escape=False): +def check_syslog_message(message, prefix, error_message=None, file_monitor=None, timeout=T_30, escape=False): """Create a callback to detect "DEBUG: Read lines from command " debug line. + Args: message (str): Command to be monitored. error_message (str): Error message. @@ -78,6 +84,51 @@ def check_syslog_messages(message, error_message, prefix, file_monitor=None, tim timeout (int): Timeout to check the log. escape (bool): Flag to escape special characters in the pattern. """ - callback_msg = fr"DEBUG: Reading syslog message: '{message}'" + if error_message is None: + error_message = f"Did not receive the expected 'Reading syslog message: {message}' event" + + callback_msg = fr".*DEBUG: Reading syslog message: '{message}'.*" + check_logcollector_event(file_monitor=file_monitor, timeout=timeout, callback=callback_msg, error_message=error_message, prefix=prefix, escape=escape) + + +def check_ignore_restrict_message(message, regex, tag, prefix, error_message=None, file_monitor=None, timeout=T_10, + escape=False): + """Create a callback to detect "DEBUG: Ignoring the log ... due to config" debug line. + + Args: + message (str): Command to be monitored. + regex (str): regex pattern configured to ignore or restrict to. + tag (str): string with the configured tag. 
Values: 'ignore' or 'restrict' + error_message (str): Error message. + prefix (str): Daemon that generates the error log. + file_monitor (FileMonitor): Log monitor. + timeout (int): Timeout to check the log. + escape (bool): Flag to escape special characters in the pattern. + + Returns: True if the expected message has been found, False otherwise. + """ + if error_message is None: + error_message = f"Did not receive the expected 'Ignoring the log line: {message} due to {tag} config' event" + + callback_msg = fr"Ignoring the log line '{message}' due to {tag} config: '{regex}'" + + return check_logcollector_event(file_monitor=file_monitor, timeout=timeout, callback=callback_msg, + error_message=error_message, prefix=prefix, escape=escape) + + +def check_ignore_restrict_message_not_found(message, regex, tag, prefix): + """Check that an unexpected "Ignoring the log line..." event does not appear and a log is not ignored when it + does not match the regex. + + Args: + message (str): Message to be monitored. + regex (str): regex pattern configured to ignore or restrict to. + tag (str): string with the configured tag. Values: 'ignore' or 'restrict' + prefix (str): Daemon that generates the error log. + """ + log_found = False + with pytest.raises(TimeoutError): + log_found = check_ignore_restrict_message(message=message, regex=regex, tag=tag, prefix=prefix) + assert log_found is False, ERR_MSG_UNEXPECTED_IGNORE_EVENT diff --git a/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/__init__.py b/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/__init__.py index 9cadb8d85b..328eccd6a0 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/__init__.py @@ -19,6 +19,8 @@ T_800 = 800 CUSTOM_VULNERABLE_PACKAGES = 'custom_vulnerable_packages.json' +CUSTOM_VULNERABLE_PKG_EMPTY_VENDOR = 'custom_vulnerable_pkg_empty_vendor.json' +CUSTOM_VULNERABLE_PKG_EMPTY_VENDOR_VERSION = 'custom_vulnerable_pkg_empty_vendor_version.json' CUSTOM_NVD_FEED = 'custom_nvd_feed.json' CUSTOM_NVD_ALTERNATIVE_FEED = 'custom_nvd_alternative_feed.json' CUSTOM_REDHAT_JSON_FEED = 'custom_redhat_json_feed.json' @@ -28,10 +30,12 @@ CUSTOM_DEBIAN_JSON_FEED = 'custom_debian_json_feed.json' CUSTOM_MSU_JSON_FEED = 'custom_msu.json' CUSTOM_CPE_HELPER = 'custom_cpe_helper.json' +CUSTOM_GENERIC_CPE_HELPER = 'custom_generic_cpe_helper_one_package.json' CUSTOM_CPE_HELPER_TEMPLATE = 'custom_cpe_helper_template.json' CUSTOM_ARCHLINUX_JSON_FEED = 'custom_archlinux_feed.json' CUSTOM_ALAS_JSON_FEED = 'custom_alas_feed.json' CUSTOM_ALAS2_JSON_FEED = 'custom_alas2_feed.json' +CUSTOM_ALAS_2022_JSON_FEED = 'custom_alas_2022_feed.json' CUSTOM_SUSE_OVAL_FEED = 'custom_suse_oval_feed.xml' VULNERABILITY_DETECTOR_PREFIX = r'.*wazuh-modulesd:vulnerability-detector.*' @@ -157,7 +161,7 @@ def insert_suse_system_package(agent_id='000', version='SLES15'): ValueError: If version parameter has an invalid value. 
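+
+    Example:
+        Illustrative usage (not taken from the test suite): seed the database of the default agent
+        with the SLES15 package set defined in `SUSE_SYSTEM_PACKAGE`.
+
+            insert_suse_system_package(agent_id='000', version='SLES15')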
""" if version not in SUSE_SYSTEM_PACKAGE: - raise ValueError(f"Suse system parameter invalid.") + raise ValueError('Suse system parameter invalid.') for package in SUSE_SYSTEM_PACKAGE[version]: agent_db.insert_package(name=package['name'], version=package['version'], source=package['name'], diff --git a/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/event_monitor.py b/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/event_monitor.py index b918915a99..26ae9a5439 100644 --- a/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/event_monitor.py +++ b/deps/wazuh_testing/wazuh_testing/modules/vulnerability_detector/event_monitor.py @@ -510,3 +510,16 @@ def check_error_inserting_package(log_monitor=None, agent_id='000', timeout=vd.T check_vuln_detector_event(file_monitor=log_monitor, timeout=timeout, callback=f"ERROR: .* Could not insert the CPEs from the agent '{agent_id}' " "into the database.") + + +def check_version_log(package_name='', log_monitor=None, timeout=vd.T_20): + """Check that the version log could not be reached. + + Args: + package_name (str): Package name. + log_monitor (FileMonitor): Log monitor. + timeout (str): timeout to check the event in Wazuh log. + """ + check_vuln_detector_event(file_monitor=log_monitor, timeout=timeout, + callback=fr"DEBUG: .* Couldn't get the version of the CPE for the {package_name} " + "package.") diff --git a/deps/wazuh_testing/wazuh_testing/qa_docs/schema.yaml b/deps/wazuh_testing/wazuh_testing/qa_docs/schema.yaml index 88c2d4e3ea..3bf79b0f40 100644 --- a/deps/wazuh_testing/wazuh_testing/qa_docs/schema.yaml +++ b/deps/wazuh_testing/wazuh_testing/qa_docs/schema.yaml @@ -182,6 +182,7 @@ predefined_values: - 4.2.0 - 4.3.0 - 4.4.0 + - 4.5.0 tags: - active_response - agentd diff --git a/deps/wazuh_testing/wazuh_testing/remote.py b/deps/wazuh_testing/wazuh_testing/remote.py index 779e5be4bf..ee4bb6b54d 100644 --- a/deps/wazuh_testing/wazuh_testing/remote.py +++ b/deps/wazuh_testing/wazuh_testing/remote.py @@ -499,7 +499,7 @@ def wait_to_remoted_update_groups(wazuh_log_monitor): # The log is truncated to ensure that the information has been loaded after the agent has been registered. truncate_file(LOG_FILE_PATH) - callback_pattern = '.*c_files().*End updating shared files sums.' + callback_pattern = '.*c_files().*End updating shared files.' 
error_message = 'Could not find the groups reload log' check_remoted_log_event(wazuh_log_monitor, callback_pattern, error_message, timeout=SYNC_FILES_TIMEOUT) @@ -684,7 +684,7 @@ def keep_alive_until_group_configuration_sent(sender, interval=1, timeout=20): args=(sender,)) keep_alive_agent.start() - log_callback = make_callback(pattern=".*End sending file '.+' to agent '\d+'\.", prefix='.*wazuh-remoted.*') + log_callback = make_callback(pattern=r".*End sending file '.+' to agent '\d+'\.", prefix=r'.*wazuh-remoted.*') log_monitor = FileMonitor(LOG_FILE_PATH) log_monitor.start(timeout=REMOTED_GLOBAL_TIMEOUT, callback=log_callback, error_message="New shared configuration was not sent") diff --git a/deps/wazuh_testing/wazuh_testing/scripts/simulate_agents.py b/deps/wazuh_testing/wazuh_testing/scripts/simulate_agents.py index 7990097557..846b25405e 100644 --- a/deps/wazuh_testing/wazuh_testing/scripts/simulate_agents.py +++ b/deps/wazuh_testing/wazuh_testing/scripts/simulate_agents.py @@ -48,6 +48,9 @@ def process_script_parameters(args): args.modules.append('receive_messages') args.modules_eps.append('0') + if not args.version.startswith('v'): + args.version = 'v' + args.version + def set_agent_modules_and_eps(agent, active_modules, modules_eps): """Set active modules and EPS to an agent. diff --git a/deps/wazuh_testing/wazuh_testing/tools/__init__.py b/deps/wazuh_testing/wazuh_testing/tools/__init__.py index 60fa272e99..1a0d4727be 100644 --- a/deps/wazuh_testing/wazuh_testing/tools/__init__.py +++ b/deps/wazuh_testing/wazuh_testing/tools/__init__.py @@ -34,6 +34,7 @@ WAZUH_UNIX_USER = 'wazuh' WAZUH_UNIX_GROUP = 'wazuh' GLOBAL_DB_PATH = os.path.join(WAZUH_PATH, 'queue', 'db', 'global.db') + ARCHIVES_LOG_FILE_PATH = os.path.join(WAZUH_PATH, 'logs', 'archives', 'archives.log') ACTIVE_RESPONSE_BINARY_PATH = os.path.join(WAZUH_PATH, 'active-response', 'bin') else: WAZUH_SOURCES = os.path.join('/', 'wazuh') diff --git a/deps/wazuh_testing/wazuh_testing/tools/agent_simulator.py b/deps/wazuh_testing/wazuh_testing/tools/agent_simulator.py index feb1e2c267..316061e1cd 100644 --- a/deps/wazuh_testing/wazuh_testing/tools/agent_simulator.py +++ b/deps/wazuh_testing/wazuh_testing/tools/agent_simulator.py @@ -462,6 +462,8 @@ def process_message(self, sender, message): kind, checksum, name = msg_decoded_list[1:4] if kind == 'file' and "merged.mg" in name: self.update_checksum(checksum) + elif '#!-force_reconnect' in msg_decoded_list[0]: + sender.reconnect(self.startup_msg) def process_command(self, sender, message_list): """Process agent received commands through the socket. @@ -1441,12 +1443,24 @@ def __init__(self, manager_address, manager_port='1514', protocol=TCP): self.manager_address = manager_address self.manager_port = manager_port self.protocol = protocol.upper() + self.socket = None + self.connect() + + def connect(self): if is_tcp(self.protocol): self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.socket.connect((self.manager_address, int(self.manager_port))) if is_udp(self.protocol): self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + def reconnect(self, event): + if is_tcp(self.protocol): + self.socket.shutdown(socket.SHUT_RDWR) + self.socket.close() + self.connect() + if event: + self.send_event(event) + def send_event(self, event): if is_tcp(self.protocol): length = pack(' shutil.rmtree(path, onerror=on_write_error) + """ + import stat + # Check if the error is an access error for Write permissions. 
+ if not os.access(path, os.W_OK): + # Add write permissions so file can be edited and execute function. + os.chmod(path, 0o0777) + function(path) + # If error is not Write access error, raise the error + else: + raise def download_file(source_url, dest_path): @@ -242,6 +286,25 @@ def remove_file(file_path): delete_path_recursively(file_path) +def modify_all_files_in_folder(folder_path, data): + """Write data into all files in a folder + Args: + file_path (str): File or directory path to modify. + data (str): what to write into the file. + """ + for file in os.listdir(folder_path): + write_file(os.path.join(folder_path, file), data) + + +def delete_all_files_in_folder(folder_path): + """ Remove al files inside a folder + Args: + file_path (str): File or directory path to remove. + """ + for file in os.listdir(folder_path): + os.remove(os.path.join(folder_path, file)) + + def validate_json_file(file_path): try: with open(file_path) as file: @@ -541,3 +604,300 @@ def replace_regex_in_file(search_regex, replace_regex, file_path): # Write the file data write_file(file_path, file_data) + + +def _create_fifo(path, name): + """Create a FIFO file. + + Args: + path (str): path where the file will be created. + name (str): file name. + + Raises: + OSError: if `mkfifo` fails. + """ + fifo_path = os.path.join(path, name) + try: + os.mkfifo(fifo_path) + except OSError: + raise + + +def _create_sym_link(path, name, target): + """Create a symbolic link. + + Args: + path (str): path where the symbolic link will be created. + name (str): file name. + target (str): path where the symbolic link will be pointing to. + + Raises: + OSError: if `symlink` fails. + """ + symlink_path = os.path.join(path, name) + try: + os.symlink(target, symlink_path) + except OSError: + raise + + +def _create_hard_link(path, name, target): + """Create a hard link. + + Args: + path (str): path where the hard link will be created. + name (str): file name. + target (str): path where the hard link will be pointing to. + + Raises: + OSError: if `link` fails. + """ + link_path = os.path.join(path, name) + try: + os.link(target, link_path) + except OSError: + raise + + +def _create_socket(path, name): + """Create a Socket file. + + Args: + path (str): path where the socket will be created. + name (str): file name. + + Raises: + OSError: if `unlink` fails. + """ + socket_path = os.path.join(path, name) + try: + os.unlink(socket_path) + except OSError: + if os.path.exists(socket_path): + raise + sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) + sock.bind(socket_path) + + +def _create_regular(path, name, content=''): + """Create a regular file. + + Args: + path (str): path where the regular file will be created. + name (str): file name. + content (str, optional): content of the created file. Default `''` + """ + regular_path = os.path.join(path, name) + mode = 'wb' if isinstance(content, bytes) else 'w' + + with open(regular_path, mode) as f: + f.write(content) + + +def _create_regular_windows(path, name, content=''): + """Create a regular file in Windows + + Args: + path (str): path where the regular file will be created. + name (str): file name. + content (str, optional): content of the created file. Default `''` + """ + regular_path = os.path.join(path, name) + os.popen("echo " + content + " > " + regular_path + f" runas /user:{os.getlogin()}") + + +def create_file(type_, path, name, **kwargs): + """Create a file in a given path. The path will be created in case it does not exists. 
+ + Args: + type_ (str): defined constant that specifies the type. It can be: FIFO, SYSLINK, Socket or REGULAR. + path (str): path where the file will be created. + name (str): file name. + **kwargs: Arbitrary keyword arguments. + + Keyword Args: + **content (str): content of the created regular file. + **target (str): path where the link will be pointing to. + + Raises: + ValueError: if `target` is missing for SYMLINK or HARDINK. + """ + + try: + logger.info("Creating file " + str(os.path.join(path, name)) + " of " + str(type_) + " type") + os.makedirs(path, exist_ok=True, mode=0o777) + if type_ != REGULAR: + try: + kwargs.pop('content') + except KeyError: + pass + if type_ in (SYMLINK, HARDLINK) and 'target' not in kwargs: + raise ValueError(f"'target' param is mandatory for type {type_}") + getattr(sys.modules[__name__], f'_create_{type_}')(path, name, **kwargs) + except OSError: + logger.info("File could not be created.") + pytest.skip("OS does not allow creating this file.") + + +def modify_file_content(path, name, new_content=None, is_binary=False): + """Modify the content of a file. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + new_content (str, optional): new content to append to the file. Previous content will remain. Defaults `None` + is_binary (boolean, optional): True if the file's content is in binary format. False otherwise. Defaults `False` + """ + path_to_file = os.path.join(path, name) + logger.info("- Changing content of " + str(path_to_file)) + content = "1234567890qwertyu" if new_content is None else new_content + with open(path_to_file, 'ab' if is_binary else 'a') as f: + f.write(content.encode() if is_binary else content) + + +def modify_file_mtime(path, name): + """Change the modification time of a file. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + path_to_file = os.path.join(path, name) + logger.info("- Changing mtime of " + str(path_to_file)) + stat = os.stat(path_to_file) + access_time = stat[ST_ATIME] + modification_time = stat[ST_MTIME] + modification_time = modification_time + 1000 + os.utime(path_to_file, (access_time, modification_time)) + + +def modify_file_owner(path, name): + """Change the owner of a file. The new owner will be '1'. + + On Windows, uid will always be 0. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + + def modify_file_owner_windows(): + cmd = f"takeown /S 127.0.0.1 /U {os.getlogin()} /F " + path_to_file + subprocess.call(cmd) + + def modify_file_owner_unix(): + os.chown(path_to_file, 1, -1) + + path_to_file = os.path.join(path, name) + logger.info("- Changing owner of " + str(path_to_file)) + + if sys.platform == 'win32': + modify_file_owner_windows() + else: + modify_file_owner_unix() + + +def modify_file_group(path, name): + """Change the group of a file. The new group will be '1'. + + Available for UNIX. On Windows, gid will always be 0 and the group name will be blank. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + if sys.platform == 'win32': + return + + path_to_file = os.path.join(path, name) + logger.info("- Changing group of " + str(path_to_file)) + os.chown(path_to_file, -1, 1) + + +def modify_file_permission(path, name): + """Change the permission of a file. + + On UNIX the new permissions will be '666'. 
+ On Windows, a list of denied and allowed permissions will be given for each user or group since version 3.8.0. + Only works on NTFS partitions on Windows systems. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + + def modify_file_permission_windows(): + user, _, _ = win32sec.LookupAccountName(None, f"{platform.node()}\\{os.getlogin()}") + sd = win32sec.GetFileSecurity(path_to_file, win32sec.DACL_SECURITY_INFORMATION) + dacl = sd.GetSecurityDescriptorDacl() + dacl.AddAccessAllowedAce(win32sec.ACL_REVISION, ntc.FILE_ALL_ACCESS, user) + sd.SetSecurityDescriptorDacl(1, dacl, 0) + win32sec.SetFileSecurity(path_to_file, win32sec.DACL_SECURITY_INFORMATION, sd) + + def modify_file_permission_unix(): + os.chmod(path_to_file, 0o666) + + path_to_file = os.path.join(path, name) + + logger.info("- Changing permission of " + str(path_to_file)) + + if sys.platform == 'win32': + modify_file_permission_windows() + else: + modify_file_permission_unix() + + +def modify_file_inode(path, name): + """Change the inode of a file for Linux. + + On Windows, this function does nothing. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + if sys.platform == 'win32': + return + + logger.info("- Changing inode of " + str(os.path.join(path, name))) + inode_file = 'inodetmp' + path_to_file = os.path.join(path, name) + + shutil.copy2(path_to_file, os.path.join(tempfile.gettempdir(), inode_file)) + shutil.move(os.path.join(tempfile.gettempdir(), inode_file), path_to_file) + + +def modify_file_win_attributes(path, name): + """Change the attribute of a file in Windows + + On other OS, this function does nothing. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + """ + if sys.platform != 'win32': + return + + logger.info("- Changing win attributes of " + str(os.path.join(path, name))) + path_to_file = os.path.join(path, name) + win32api.SetFileAttributes(path_to_file, win32con.FILE_ATTRIBUTE_HIDDEN) + + +def modify_file(path, name, new_content=None, is_binary=False): + """Modify a Regular file. + + Args: + path (str): path to the file to be modified. + name (str): name of the file to be modified. + new_content (str, optional): new content to add to the file. Defaults `None`. + is_binary: (boolean, optional): True if the file is binary. False otherwise. Defaults `False` + """ + logger.info("Modifying file " + str(os.path.join(path, name))) + modify_file_inode(path, name) + modify_file_content(path, name, new_content, is_binary) + modify_file_mtime(path, name) + modify_file_owner(path, name) + modify_file_group(path, name) + modify_file_permission(path, name) + modify_file_win_attributes(path, name) diff --git a/deps/wazuh_testing/wazuh_testing/tools/migration_tool/CVE_JSON_5.0_bundled.json b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/CVE_JSON_5.0_bundled.json new file mode 100644 index 0000000000..4cf417a3ec --- /dev/null +++ b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/CVE_JSON_5.0_bundled.json @@ -0,0 +1,2035 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://cve.org/cve/record/v5_00/", + "type": "object", + "title": "CVE JSON record format", + "description": "cve-schema specifies the CVE JSON record format. This is the blueprint for a rich set of JSON data that can be submitted by CVE Numbering Authorities (CNAs) and Authorized Data Publishers (ADPs) to describe a CVE Record. 
Some examples of CVE Record data include CVE ID number, affected product(s), affected version(s), and public references. While those specific items are required when assigning a CVE, there are many other optional data in the schema that can be used to enrich CVE Records for community benefit. Learn more about the CVE program at [the official website](https://cve.mitre.org). This CVE JSON record format is defined using JSON Schema. Learn more about JSON Schema [here](https://json-schema.org/).", + "definitions": { + "uriType": { + "description": "A universal resource identifier (URI), according to [RFC 3986](https://tools.ietf.org/html/rfc3986).", + "type": "string", + "format": "uri", + "minLength": 1, + "maxLength": 2048 + }, + "uuidType": { + "description": "A version 4 (random) universally unique identifier (UUID) as defined by [RFC 4122](https://tools.ietf.org/html/rfc4122#section-4.1.3).", + "type": "string", + "pattern": "^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-4[0-9A-Fa-f]{3}-[89ABab][0-9A-Fa-f]{3}-[0-9A-Fa-f]{12}$" + }, + "reference": { + "type": "object", + "required": [ + "url" + ], + "properties": { + "url": { + "description": "The uniform resource locator (URL), according to [RFC 3986](https://tools.ietf.org/html/rfc3986#section-1.1.3), that can be used to retrieve the referenced resource.", + "$ref": "#/definitions/uriType" + }, + "name": { + "description": "User created name for the reference, often the title of the page.", + "type": "string", + "maxLength": 512, + "minLength": 1 + }, + "tags": { + "description": "An array of one or more tags that describe the resource referenced by 'url'.", + "type": "array", + "minItems": 1, + "uniqueItems": true, + "items": { + "oneOf": [ + { + "$ref": "#/definitions/tagExtension" + }, + { + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://cve.mitre.org/cve/v5_00/tags/reference/", + "type": "string", + "description": "broken-link: The reference link is returning a 404 error, or the site is no longer online.\n\ncustomer-entitlement: Similar to Privileges Required, but specific to references that require non-public/paid access for customers of the particular vendor.\n\nexploit: Reference contains an in-depth/detailed description of steps to exploit a vulnerability OR the reference contains any legitimate Proof of Concept (PoC) code or exploit kit.\n\ngovernment-resource: All reference links that are from a government agency or organization should be given the Government Resource tag.\n\nissue-tracking: The reference is a post from a bug tracking tool such as MantisBT, Bugzilla, JIRA, Github Issues, etc...\n\nmailing-list: The reference is from a mailing list -- often specific to a product or vendor.\n\nmitigation: The reference contains information on steps to mitigate against the vulnerability in the event a patch can't be applied or is unavailable or for EOL product situations.\n\nnot-applicable: The reference link is not applicable to the vulnerability and was likely associated by MITRE accidentally (should be used sparingly).\n\npatch: The reference contains an update to the software that fixes the vulnerability.\n\npermissions-required: The reference link provided is blocked by a logon page. If credentials are required to see any information this tag must be applied.\n\nmedia-coverage: The reference is from a media outlet such as a newspaper, magazine, social media, or weblog. This tag is not intended to apply to any individual's personal social media account. 
It is strictly intended for public media entities.\n\nproduct: A reference appropriate for describing a product for the purpose of CPE or SWID.\n\nrelated: A reference that is for a related (but not the same) vulnerability.\n\nrelease-notes: The reference is in the format of a vendor or open source project's release notes or change log.\n\nsignature: The reference contains a method to detect or prevent the presence or exploitation of the vulnerability.\n\ntechnical-description: The reference contains in-depth technical information about a vulnerability and its exploitation process, typically in the form of a presentation or whitepaper.\n\nthird-party-advisory: Advisory is from an organization that is not the vulnerable product's vendor/publisher/maintainer.\n\nvendor-advisory: Advisory is from the vendor/publisher/maintainer of the product or the parent organization.\n\nvdb-entry: VDBs are loosely defined as sites that provide information about this vulnerability, such as advisories, with identifiers. Included VDBs are free to access, substantially public, and have broad scope and coverage (not limited to a single vendor or research organization). See: https://www.first.org/global/sigs/vrdx/vdb-catalog", + "enum": [ + "broken-link", + "customer-entitlement", + "exploit", + "government-resource", + "issue-tracking", + "mailing-list", + "mitigation", + "not-applicable", + "patch", + "permissions-required", + "media-coverage", + "product", + "related", + "release-notes", + "signature", + "technical-description", + "third-party-advisory", + "vendor-advisory", + "vdb-entry" + ] + } + ] + } + } + } + }, + "cveId": { + "type": "string", + "pattern": "^CVE-[0-9]{4}-[0-9]{4,19}$" + }, + "orgId": { + "description": "A UUID for an organization participating in the CVE program. This UUID can be used to lookup the organization record in the user registry service.", + "$ref": "#/definitions/uuidType" + }, + "userId": { + "description": "A UUID for a user participating in the CVE program. This UUID can be used to lookup the user record in the user registry service.", + "$ref": "#/definitions/uuidType" + }, + "shortName": { + "description": "A 2-32 character name that can be used to complement an organization's UUID.", + "type": "string", + "minLength": 2, + "maxLength": 32 + }, + "datestamp": { + "description": "Date/time format based on RFC3339 and ISO ISO8601.", + "type": "string", + "format": "date", + "pattern": "^((2000|2400|2800|(19|2[0-9](0[48]|[2468][048]|[13579][26])))-02-29)|(((19|2[0-9])[0-9]{2})-02-(0[1-9]|1[0-9]|2[0-8]))|(((19|2[0-9])[0-9]{2})-(0[13578]|10|12)-(0[1-9]|[12][0-9]|3[01]))|(((19|2[0-9])[0-9]{2})-(0[469]|11)-(0[1-9]|[12][0-9]|30))$" + }, + "timestamp": { + "type": "string", + "format": "date-time", + "description": "Date/time format based on RFC3339 and ISO ISO8601, with an optional timezone in the format 'yyyy-MM-ddTHH:mm:ssZZZZ'. 
If timezone offset is not given, GMT (0000) is assumed.", + "pattern": "^(((2000|2400|2800|(19|2[0-9](0[48]|[2468][048]|[13579][26])))-02-29)|(((19|2[0-9])[0-9]{2})-02-(0[1-9]|1[0-9]|2[0-8]))|(((19|2[0-9])[0-9]{2})-(0[13578]|10|12)-(0[1-9]|[12][0-9]|3[01]))|(((19|2[0-9])[0-9]{2})-(0[469]|11)-(0[1-9]|[12][0-9]|30)))T(2[0-3]|[01][0-9]):([0-5][0-9]):([0-5][0-9])(\\.[0-9]+)?(Z|[+-][0-9]{2}:[0-9]{2})?$" + }, + "version": { + "description": "A single version of a product, as expressed in its own version numbering scheme.", + "type": "string", + "minLength": 1, + "maxLength": 1024 + }, + "status": { + "description": "The vulnerability status of a given version or range of versions of a product. The statuses 'affected' and 'unaffected' indicate that the version is affected or unaffected by the vulnerability. The status 'unknown' indicates that it is unknown or unspecified whether the given version is affected. There can be many reasons for an 'unknown' status, including that an investigation has not been undertaken or that a vendor has not disclosed the status.", + "type": "string", + "enum": [ + "affected", + "unaffected", + "unknown" + ] + }, + "product": { + "type": "object", + "description": "Provides information about the set of products and services affected by this vulnerability.", + "allOf": [ + { + "anyOf": [ + { + "required": [ + "vendor", + "product" + ] + }, + { + "required": [ + "collectionURL", + "packageName" + ] + } + ] + }, + { + "anyOf": [ + { + "required": [ + "versions" + ] + }, + { + "required": [ + "defaultStatus" + ] + } + ] + } + ], + "properties": { + "vendor": { + "type": "string", + "description": "Name of the organization, project, community, individual, or user that created or maintains this product or hosted service. Can be 'N/A' if none of those apply. 
When collectionURL and packageName are used, this field may optionally represent the user or account within the package collection associated with the package.", + "minLength": 1, + "maxLength": 512 + }, + "product": { + "type": "string", + "description": "Name of the affected product.", + "minLength": 1, + "maxLength": 2048 + }, + "collectionURL": { + "description": "URL identifying a package collection (determines the meaning of packageName).", + "$ref": "#/definitions/uriType", + "examples": [ + "https://access.redhat.com/downloads/content/package-browser", + "https://addons.mozilla.org", + "https://addons.thunderbird.net", + "https://anaconda.org/anaconda/repo", + "https://app.vagrantup.com/boxes/search", + "https://apps.apple.com", + "https://archlinux.org/packages", + "https://atmospherejs.meteor.com", + "https://atom.io/packages", + "https://bitbucket.org", + "https://bower.io", + "https://brew.sh/", + "https://chocolatey.org/packages", + "https://chrome.google.com/webstore", + "https://clojars.org", + "https://cocoapods.org", + "https://code.dlang.org", + "https://conan.io/center", + "https://cpan.org/modules", + "https://cran.r-project.org", + "https://crates.io", + "https://ctan.org/pkg", + "https://drupal.org", + "https://exchange.adobe.com", + "https://forge.puppet.com/modules", + "https://github.com", + "https://gitlab.com/explore", + "https://golang.org/pkg", + "https://guix.gnu.org/packages", + "https://hackage.haskell.org", + "https://helm.sh", + "https://hub.docker.com", + "https://juliahub.com", + "https://lib.haxe.org", + "https://luarocks.org", + "https://marketplace.visualstudio.com", + "https://melpa.org", + "https://microsoft.com/en-us/store/apps", + "https://nimble.directory", + "https://nuget.org/packages", + "https://opam.ocaml.org/packages", + "https://openwrt.org/packages/index", + "https://package.elm-lang.org", + "https://packagecontrol.io", + "https://packages.debian.org", + "https://packages.gentoo.org", + "https://packagist.org", + "https://pear.php.net/packages.php", + "https://pecl.php.net", + "https://platformio.org/lib", + "https://play.google.com/store", + "https://plugins.gradle.org", + "https://projects.eclipse.org", + "https://pub.dev", + "https://pypi.python.org", + "https://registry.npmjs.org", + "https://registry.terraform.io", + "https://repo.hex.pm", + "https://repo.maven.apache.org/maven2", + "https://rubygems.org", + "https://search.nixos.org/packages", + "https://sourceforge.net", + "https://wordpress.org/plugins" + ] + }, + "packageName": { + "type": "string", + "description": "Name or identifier of the affected software package as used in the package collection.", + "minLength": 1, + "maxLength": 2048 + }, + "cpes": { + "type": "array", + "description": "Affected products defined by CPE. This is an array of CPE values (vulnerable and not), we use an array so that we can make multiple statements about the same version and they are separate (if we used a JSON object we'd essentially be keying on the CPE name and they would have to overlap). Also, this allows things like cveDataVersion or cveDescription to be applied directly to the product entry. This also allows more complex statements such as \"Product X between versions 10.2 and 10.8\" to be put in a machine-readable format. 
As well since multiple statements can be used multiple branches of the same product can be defined here.", + "uniqueItems": true, + "items": { + "title": "CPE Name", + "type": "string", + "description": "Common Platform Enumeration (CPE) Name in either 2.2 or 2.3 format", + "pattern": "([c][pP][eE]:/[AHOaho]?(:[A-Za-z0-9._\\-~%]*){0,6})|(cpe:2\\.3:[aho*\\-](:(((\\?*|\\*?)([a-zA-Z0-9\\-._]|(\\\\[\\\\*?!\"#$%&'()+,/:;<=>@\\[\\]\\^`{|}~]))+(\\?*|\\*?))|[*\\-])){5}(:(([a-zA-Z]{2,3}(-([a-zA-Z]{2}|[0-9]{3}))?)|[*\\-]))(:(((\\?*|\\*?)([a-zA-Z0-9\\-._]|(\\\\[\\\\*?!\"#$%&'()+,/:;<=>@\\[\\]\\^`{|}~]))+(\\?*|\\*?))|[*\\-])){4})", + "minLength": 1, + "maxLength": 2048 + } + }, + "modules": { + "type": "array", + "description": "A list of the affected components, features, modules, sub-components, sub-products, APIs, commands, utilities, programs, or functionalities (optional).", + "uniqueItems": true, + "items": { + "type": "string", + "description": "Name of the affected component, feature, module, sub-component, sub-product, API, command, utility, program, or functionality (optional).", + "minLength": 1, + "maxLength": 4096 + } + }, + "programFiles": { + "type": "array", + "description": "A list of the affected source code files (optional).", + "uniqueItems": true, + "items": { + "description": "Name or path or location of the affected source code file.", + "type": "string", + "minLength": 1, + "maxLength": 1024 + } + }, + "programRoutines": { + "type": "array", + "description": "A list of the affected source code functions, methods, subroutines, or procedures (optional).", + "uniqueItems": true, + "items": { + "type": "object", + "description": "An object describing program routine.", + "required": [ + "name" + ], + "properties": { + "name": { + "type": "string", + "description": "Name of the affected source code file, function, method, subroutine, or procedure.", + "minLength": 1, + "maxLength": 4096 + } + } + } + }, + "platforms": { + "title": "Platforms", + "description": "List of specific platforms if the vulnerability is only relevant in the context of these platforms (optional). Platforms may include execution environments, operating systems, virtualization technologies, hardware models, or computing architectures. The lack of this field or an empty array implies that the other fields are applicable to all relevant platforms.", + "type": "array", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "string", + "examples": [ + "iOS", + "Android", + "Windows", + "macOS", + "x86", + "ARM", + "64 bit", + "Big Endian", + "iPad", + "Chromebook", + "Docker", + "Model T" + ], + "maxLength": 1024 + } + }, + "repo": { + "description": "The URL of the source code repository, for informational purposes and/or to resolve git hash version ranges.", + "$ref": "#/definitions/uriType" + }, + "defaultStatus": { + "description": "The default status for versions that are not otherwise listed in the versions list. If not specified, defaultStatus defaults to 'unknown'. Versions or defaultStatus may be omitted, but not both.", + "$ref": "#/definitions/status" + }, + "versions": { + "type": "array", + "description": "Set of product versions or version ranges related to the vulnerability. The versions satisfy the CNA Rules [8.1.2 requirement](https://cve.mitre.org/cve/cna/rules.html#section_8-1_cve_entry_information_requirements). 
Versions or defaultStatus may be omitted, but not both.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "A single version or a range of versions, with vulnerability status.\n\nAn entry with only 'version' and 'status' indicates the status of a single version.\n\nOtherwise, an entry describes a range; it must include the 'versionType' property, to define the version numbering semantics in use, and 'limit', to indicate the non-inclusive upper limit of the range. The object describes the status for versions V such that 'version' <= V and V < 'limit', using the <= and < semantics defined for the specific kind of 'versionType'. Status changes within the range can be specified by an optional 'changes' list.\n\nThe algorithm to decide the status specified for a version V is:\n\n\tfor entry in product.versions {\n\t\tif entry.lessThan is not present and entry.lessThanOrEqual is not present and v == entry.version {\n\t\t\treturn entry.status\n\t\t}\n\t\tif (entry.lessThan is present and entry.version <= v and v < entry.lessThan) or\n\t\t (entry.lessThanOrEqual is present and entry.version <= v and v <= entry.lessThanOrEqual) { // <= and < defined by entry.versionType\n\t\t\tstatus = entry.status\n\t\t\tfor change in entry.changes {\n\t\t\t\tif change.at <= v {\n\t\t\t\t\tstatus = change.status\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn status\n\t\t}\n\t}\n\treturn product.defaultStatus\n\n.", + "oneOf": [ + { + "required": [ + "version", + "status" + ], + "maxProperties": 2 + }, + { + "required": [ + "version", + "status", + "versionType" + ], + "oneOf": [ + { + "required": [ + "lessThan" + ] + }, + { + "required": [ + "lessThanOrEqual" + ] + } + ] + } + ], + "properties": { + "version": { + "description": "The single version being described, or the version at the start of the range. By convention, typically 0 denotes the earliest possible version.", + "$ref": "#/definitions/version" + }, + "status": { + "description": "The vulnerability status for the version or range of versions. For a range, the status may be refined by the 'changes' list.", + "$ref": "#/definitions/status" + }, + "versionType": { + "type": "string", + "description": "The version numbering system used for specifying the range. This defines the exact semantics of the comparison (less-than) operation on versions, which is required to understand the range itself. 'Custom' indicates that the version type is unspecified and should be avoided whenever possible. It is included primarily for use in conversion of older data files.", + "minLength": 1, + "maxLength": 128, + "examples": [ + "custom", + "git", + "maven", + "python", + "rpm", + "semver" + ] + }, + "lessThan": { + "description": "The non-inclusive upper limit of the range. This is the least version NOT in the range. The usual version syntax is expanded to allow a pattern to end in an asterisk `(*)`, indicating an arbitrarily large number in the version ordering. For example, `{version: 1.0 lessThan: 1.*}` would describe the entire 1.X branch for most range kinds, and `{version: 2.0, lessThan: *}` describes all versions starting at 2.0, including 3.0, 5.1, and so on. Only one of lessThan and lessThanOrEqual should be specified.", + "$ref": "#/definitions/version" + }, + "lessThanOrEqual": { + "description": "The inclusive upper limit of the range. This is the greatest version contained in the range. Only one of lessThan and lessThanOrEqual should be specified. 
For example, `{version: 1.0, lessThanOrEqual: 1.3}` covers all versions from 1.0 up to and including 1.3.", + "$ref": "#/definitions/version" + }, + "changes": { + "type": "array", + "description": "A list of status changes that take place during the range. The array should be sorted in increasing order by the 'at' field, according to the versionType, but clients must re-sort the list themselves rather than assume it is sorted.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "The start of a single status change during the range.", + "required": [ + "at", + "status" + ], + "properties": { + "at": { + "description": "The version at which a status change occurs.", + "$ref": "#/definitions/version" + }, + "status": { + "description": "The new status in the range starting at the given version.", + "$ref": "#/definitions/status" + } + } + } + } + } + } + } + } + }, + "dataType": { + "description": "Indicates the type of information represented in the JSON instance.", + "type": "string", + "enum": [ + "CVE_RECORD" + ] + }, + "dataVersion": { + "description": "The version of the schema being used. Used to support multiple versions of this format.", + "type": "string", + "enum": [ + "5.0" + ] + }, + "cveMetadataPublished": { + "description": "This is meta data about the CVE ID such as the CVE ID, who requested it, who assigned it, when it was requested, the current state (PUBLISHED, REJECTED, etc.) and so on. These fields are controlled by the CVE Services.", + "type": "object", + "required": [ + "cveId", + "assignerOrgId", + "state" + ], + "properties": { + "cveId": { + "description": "The CVE identifier that this record pertains to.", + "$ref": "#/definitions/cveId" + }, + "assignerOrgId": { + "$ref": "#/definitions/orgId", + "description": "The UUID for the organization to which the CVE ID was originally assigned. This UUID can be used to lookup the organization record in the user registry service." + }, + "assignerShortName": { + "$ref": "#/definitions/shortName", + "description": "The short name for the organization to which the CVE ID was originally assigned." + }, + "requesterUserId": { + "$ref": "#/definitions/userId", + "description": "The user that requested the CVE identifier." + }, + "dateUpdated": { + "description": "The date/time the record was last updated.", + "$ref": "#/definitions/timestamp" + }, + "serial": { + "type": "integer", + "minimum": 1, + "description": "The system of record causes this to start at 1, and increment by 1 each time a submission from a data provider changes this CVE Record. The incremented value moves to the Rejected schema upon a PUBLISHED->REJECTED transition, and moves to the Published schema upon a REJECTED->PUBLISHED transition." + }, + "dateReserved": { + "$ref": "#/definitions/timestamp", + "description": "The date/time this CVE ID was reserved in the CVE automation workgroup services system. Disclaimer: This date reflects when the CVE ID was reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE." + }, + "datePublished": { + "$ref": "#/definitions/timestamp", + "description": "The date/time the CVE Record was first published in the CVE List." 
+ }, + "state": { + "description": "State of CVE - PUBLISHED, REJECTED.", + "type": "string", + "enum": [ + "PUBLISHED" + ] + } + }, + "additionalProperties": false + }, + "cveMetadataRejected": { + "type": "object", + "description": "This is meta data about the CVE ID such as the CVE ID, who requested it, who assigned it, when it was requested, the current state (PUBLISHED, REJECTED, etc.) and so on. These fields are controlled by the CVE Services.", + "required": [ + "cveId", + "assignerOrgId", + "state" + ], + "properties": { + "cveId": { + "description": "The CVE identifier that this record pertains to.", + "$ref": "#/definitions/cveId" + }, + "assignerOrgId": { + "$ref": "#/definitions/orgId", + "description": "The UUID for the organization to which the CVE ID was originally assigned." + }, + "assignerShortName": { + "$ref": "#/definitions/shortName", + "description": "The short name for the organization to which the CVE ID was originally assigned." + }, + "serial": { + "type": "integer", + "minimum": 1, + "description": "The system of record causes this to start at 1, and increment by 1 each time a submission from a data provider changes this CVE Record. The incremented value moves to the Rejected schema upon a PUBLISHED->REJECTED transition, and moves to the Published schema upon a REJECTED->PUBLISHED transition." + }, + "dateUpdated": { + "description": "The date/time the record was last updated.", + "$ref": "#/definitions/timestamp" + }, + "datePublished": { + "$ref": "#/definitions/timestamp", + "description": "The date/time the CVE Record was first published in the CVE List." + }, + "dateRejected": { + "$ref": "#/definitions/timestamp", + "description": "The date/time the CVE ID was rejected." + }, + "state": { + "type": "string", + "description": "State of CVE - PUBLISHED, REJECTED.", + "enum": [ + "REJECTED" + ] + }, + "dateReserved": { + "$ref": "#/definitions/timestamp", + "description": "The date/time this CVE ID was reserved in the CVE automation workgroup services system. Disclaimer: This date reflects when the CVE ID was reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE." + } + }, + "additionalProperties": false + }, + "providerMetadata": { + "type": "object", + "description": "Details related to the information container provider (CNA or ADP).", + "properties": { + "orgId": { + "$ref": "#/definitions/orgId", + "description": "The container provider's organizational UUID." + }, + "shortName": { + "$ref": "#/definitions/shortName", + "description": "The container provider's organizational short name." + }, + "dateUpdated": { + "$ref": "#/definitions/timestamp", + "description": "Timestamp to be set by the system of record at time of submission. If dateUpdated is provided to the system of record it will be replaced by the current timestamp at the time of submission." + } + }, + "required": [ + "orgId" + ] + }, + "cnaPublishedContainer": { + "description": "An object containing the vulnerability information provided by a CVE Numbering Authority (CNA) for a published CVE ID. There can only be one CNA container per CVE record since there can only be one assigning CNA. 
The CNA container must include the required information defined in the CVE Rules, which includes a product, version, problem type, prose description, and a reference.", + "type": "object", + "properties": { + "providerMetadata": { + "$ref": "#/definitions/providerMetadata" + }, + "dateAssigned": { + "$ref": "#/definitions/timestamp", + "description": "The date/time this CVE ID was associated with a vulnerability by a CNA." + }, + "datePublic": { + "$ref": "#/definitions/timestamp", + "description": "If known, the date/time the vulnerability was disclosed publicly." + }, + "title": { + "type": "string", + "description": "A title, headline, or a brief phrase summarizing the CVE record. Eg., Buffer overflow in Example Soft.", + "minLength": 1, + "maxLength": 256 + }, + "descriptions": { + "$ref": "#/definitions/descriptions" + }, + "affected": { + "$ref": "#/definitions/affected" + }, + "problemTypes": { + "$ref": "#/definitions/problemTypes" + }, + "references": { + "$ref": "#/definitions/references" + }, + "impacts": { + "$ref": "#/definitions/impacts" + }, + "metrics": { + "$ref": "#/definitions/metrics" + }, + "configurations": { + "$ref": "#/definitions/configurations" + }, + "workarounds": { + "$ref": "#/definitions/workarounds" + }, + "solutions": { + "$ref": "#/definitions/solutions" + }, + "exploits": { + "$ref": "#/definitions/exploits" + }, + "timeline": { + "$ref": "#/definitions/timeline" + }, + "credits": { + "$ref": "#/definitions/credits" + }, + "source": { + "$ref": "#/definitions/source" + }, + "tags": { + "$ref": "#/definitions/cnaTags" + }, + "taxonomyMappings": { + "$ref": "#/definitions/taxonomyMappings" + } + }, + "required": [ + "providerMetadata", + "descriptions", + "affected", + "references" + ], + "patternProperties": { + "^x_[^.]*$": {} + }, + "additionalProperties": false + }, + "cnaRejectedContainer": { + "description": "An object containing the vulnerability information provided by a CVE Numbering Authority (CNA) for a rejected CVE ID. There can only be one CNA container per CVE record since there can only be one assigning CNA.", + "type": "object", + "properties": { + "providerMetadata": { + "$ref": "#/definitions/providerMetadata" + }, + "rejectedReasons": { + "description": "Reasons for rejecting this CVE Record.", + "$ref": "#/definitions/descriptions" + }, + "replacedBy": { + "type": "array", + "description": "Contains an array of CVE IDs that this CVE ID was rejected in favor of because this CVE ID was assigned to the vulnerabilities.", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/cveId" + } + } + }, + "required": [ + "providerMetadata", + "rejectedReasons" + ], + "patternProperties": { + "^x_[^.]*$": {} + }, + "additionalProperties": false + }, + "adpContainer": { + "description": "An object containing the vulnerability information provided by an Authorized Data Publisher (ADP). Since multiple ADPs can provide information for a CVE ID, an ADP container must indicate which ADP is the source of the information in the object.", + "type": "object", + "properties": { + "providerMetadata": { + "$ref": "#/definitions/providerMetadata" + }, + "datePublic": { + "$ref": "#/definitions/timestamp", + "description": "If known, the date/time the vulnerability was disclosed publicly." 
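As a quick orientation to the `cnaPublishedContainer` definition above, the sketch below shows a minimal container that satisfies its four required properties. All identifiers and text are placeholders, and the `affected` and `references` entries are abbreviated because their underlying definitions (`product` and `reference`) are not part of this excerpt.

```python
# Minimal CNA container covering the required properties of cnaPublishedContainer:
# providerMetadata, descriptions, affected and references (placeholder values only).
minimal_cna_container = {
    "providerMetadata": {
        "orgId": "00000000-0000-4000-8000-000000000000"   # placeholder org UUID
    },
    "descriptions": [
        {"lang": "en", "value": "Example description of the vulnerability."}
    ],
    "affected": [
        {
            "vendor": "Example Vendor",
            "product": "Example Product",
            "versions": [{"version": "1.0", "status": "affected"}]
        }
    ],
    "references": [
        {"url": "https://example.com/security/advisory-0001"}
    ]
}
```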
+ }, + "title": { + "type": "string", + "description": "A title, headline, or a brief phrase summarizing the information in an ADP container.", + "minLength": 1, + "maxLength": 256 + }, + "descriptions": { + "$ref": "#/definitions/descriptions" + }, + "affected": { + "$ref": "#/definitions/affected" + }, + "problemTypes": { + "$ref": "#/definitions/problemTypes" + }, + "references": { + "$ref": "#/definitions/references" + }, + "impacts": { + "$ref": "#/definitions/impacts" + }, + "metrics": { + "$ref": "#/definitions/metrics" + }, + "configurations": { + "$ref": "#/definitions/configurations" + }, + "workarounds": { + "$ref": "#/definitions/workarounds" + }, + "solutions": { + "$ref": "#/definitions/solutions" + }, + "exploits": { + "$ref": "#/definitions/exploits" + }, + "timeline": { + "$ref": "#/definitions/timeline" + }, + "credits": { + "$ref": "#/definitions/credits" + }, + "source": { + "$ref": "#/definitions/source" + }, + "tags": { + "$ref": "#/definitions/adpTags" + }, + "taxonomyMappings": { + "$ref": "#/definitions/taxonomyMappings" + } + }, + "required": [ + "providerMetadata" + ], + "minProperties": 2, + "patternProperties": { + "^x_[^.]*$": {} + }, + "additionalProperties": false + }, + "affected": { + "type": "array", + "description": "List of affected products.", + "minItems": 1, + "items": { + "$ref": "#/definitions/product" + } + }, + "description": { + "type": "object", + "description": "Text in a particular language with optional alternate markup or formatted representation (e.g., Markdown) or embedded media.", + "properties": { + "lang": { + "$ref": "#/definitions/language" + }, + "value": { + "type": "string", + "description": "Plain text description.", + "minLength": 1, + "maxLength": 4096 + }, + "supportingMedia": { + "type": "array", + "title": "Supporting media", + "description": "Supporting media data for the description such as markdown, diagrams, .. (optional). Similar to RFC 2397 each media object has three main parts: media type, media data value, and an optional boolean flag to indicate if the media data is base64 encoded.", + "uniqueItems": true, + "minItems": 1, + "items": { + "type": "object", + "properties": { + "type": { + "type": "string", + "title": "Media type", + "minLength": 1, + "maxLength": 256, + "description": "RFC2046 compliant IANA Media type for eg., text/markdown, text/html.", + "examples": [ + "text/markdown", + "text/html", + "image/png", + "image/svg", + "audio/mp3" + ] + }, + "base64": { + "type": "boolean", + "title": "Encoding", + "description": "If true then the value field contains the media data encoded in base64. If false then the value field contains the UTF-8 media content.", + "default": false + }, + "value": { + "type": "string", + "description": "Supporting media content, up to 16K. If base64 is true, this field stores base64 encoded data.", + "minLength": 1, + "maxLength": 16384 + } + }, + "required": [ + "type", + "value" + ] + } + } + }, + "required": [ + "lang", + "value" + ], + "additionalProperties": false + }, + "englishLanguageDescription": { + "type": "object", + "description": "A description with lang set to an English language (en, en_US, en_UK, and so on).", + "properties": { + "lang": { + "$ref": "#/definitions/englishLanguage" + } + }, + "required": [ + "lang" + ] + }, + "descriptions": { + "type": "array", + "description": "A list of multi-lingual descriptions of the vulnerability. E.g., [PROBLEMTYPE] in [COMPONENT] in [VENDOR] [PRODUCT] [VERSION] on [PLATFORMS] allows [ATTACKER] to [IMPACT] via [VECTOR]. 
OR [COMPONENT] in [VENDOR] [PRODUCT] [VERSION] [ROOT CAUSE], which allows [ATTACKER] to [IMPACT] via [VECTOR].", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/description" + }, + "contains": { + "$ref": "#/definitions/englishLanguageDescription" + } + }, + "problemTypes": { + "type": "array", + "description": "This is problem type information (e.g. CWE identifier). Must contain: At least one entry, can be text, OWASP, CWE, please note that while only one is required you can use more than one (or indeed all three) as long as they are correct). (CNA requirement: [PROBLEMTYPE]).", + "items": { + "type": "object", + "required": [ + "descriptions" + ], + "properties": { + "descriptions": { + "type": "array", + "items": { + "type": "object", + "required": [ + "lang", + "description" + ], + "properties": { + "lang": { + "$ref": "#/definitions/language" + }, + "description": { + "type": "string", + "description": "Text description of problemType, or title from CWE or OWASP.", + "minLength": 1, + "maxLength": 4096 + }, + "cweId": { + "type": "string", + "description": "CWE ID of the CWE that best describes this problemType entry.", + "minLength": 5, + "maxLength": 9, + "pattern": "^CWE-[1-9][0-9]*$" + }, + "type": { + "type": "string", + "description": "Problemtype source, text, OWASP, CWE, etc.,", + "minLength": 1, + "maxLength": 128 + }, + "references": { + "$ref": "#/definitions/references" + } + } + }, + "minItems": 1, + "uniqueItems": true + } + } + }, + "minItems": 1, + "uniqueItems": true + }, + "references": { + "type": "array", + "description": "This is reference data in the form of URLs or file objects (uuencoded and embedded within the JSON file, exact format to be decided, e.g. we may require a compressed format so the objects require unpacking before they are \"dangerous\").", + "items": { + "$ref": "#/definitions/reference" + }, + "minItems": 1, + "maxItems": 512, + "uniqueItems": true + }, + "impacts": { + "type": "array", + "description": "Collection of impacts of this vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "This is impact type information (e.g. a text description.", + "required": [ + "descriptions" + ], + "properties": { + "capecId": { + "type": "string", + "description": "CAPEC ID that best relates to this impact.", + "minLength": 7, + "maxLength": 11, + "pattern": "^CAPEC-[1-9][0-9]{0,4}$" + }, + "descriptions": { + "description": "Prose description of the impact scenario. At a minimum provide the description given by CAPEC.", + "$ref": "#/definitions/descriptions" + } + } + } + }, + "metrics": { + "type": "array", + "description": "Collection of impact scores with attribution.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "This is impact type information (e.g. a text description, CVSSv2, CVSSv3, etc.). Must contain: At least one entry, can be text, CVSSv2, CVSSv3, others may be added.", + "anyOf": [ + { + "required": [ + "cvssV3_1" + ] + }, + { + "required": [ + "cvssV3_0" + ] + }, + { + "required": [ + "cvssV2_0" + ] + }, + { + "required": [ + "other" + ] + } + ], + "properties": { + "format": { + "type": "string", + "description": "Name of the scoring format. This provides a bit of future proofing. Additional properties are not prohibited, so this will support the inclusion of proprietary formats. It also provides an easy future conversion mechanism when future score formats become part of the schema. 
example: cvssV44, format = 'cvssV44', other = cvssV4_4 JSON object. In the future, the other properties can be converted to score properties when they become part of the schema.", + "minLength": 1, + "maxLength": 64 + }, + "scenarios": { + "type": "array", + "description": "Description of the scenarios this metrics object applies to. If no specific scenario is given, GENERAL is used as the default and applies when no more specific metric matches.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "properties": { + "lang": { + "$ref": "#/definitions/language" + }, + "value": { + "type": "string", + "default": "GENERAL", + "description": "Description of the scenario this metrics object applies to. If no specific scenario is given, GENERAL is used as the default and applies when no more specific metric matches.", + "minLength": 1, + "maxLength": 4096 + } + }, + "required": [ + "lang", + "value" + ] + } + }, + "cvssV3_1": { + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "JSON Schema for Common Vulnerability Scoring System version 3.1", + "type": "object", + "definitions": { + "attackVectorType": { + "type": "string", + "enum": [ + "NETWORK", + "ADJACENT_NETWORK", + "LOCAL", + "PHYSICAL" + ] + }, + "modifiedAttackVectorType": { + "type": "string", + "enum": [ + "NETWORK", + "ADJACENT_NETWORK", + "LOCAL", + "PHYSICAL", + "NOT_DEFINED" + ] + }, + "attackComplexityType": { + "type": "string", + "enum": [ + "HIGH", + "LOW" + ] + }, + "modifiedAttackComplexityType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NOT_DEFINED" + ] + }, + "privilegesRequiredType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NONE" + ] + }, + "modifiedPrivilegesRequiredType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NONE", + "NOT_DEFINED" + ] + }, + "userInteractionType": { + "type": "string", + "enum": [ + "NONE", + "REQUIRED" + ] + }, + "modifiedUserInteractionType": { + "type": "string", + "enum": [ + "NONE", + "REQUIRED", + "NOT_DEFINED" + ] + }, + "scopeType": { + "type": "string", + "enum": [ + "UNCHANGED", + "CHANGED" + ] + }, + "modifiedScopeType": { + "type": "string", + "enum": [ + "UNCHANGED", + "CHANGED", + "NOT_DEFINED" + ] + }, + "ciaType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "HIGH" + ] + }, + "modifiedCiaType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "HIGH", + "NOT_DEFINED" + ] + }, + "exploitCodeMaturityType": { + "type": "string", + "enum": [ + "UNPROVEN", + "PROOF_OF_CONCEPT", + "FUNCTIONAL", + "HIGH", + "NOT_DEFINED" + ] + }, + "remediationLevelType": { + "type": "string", + "enum": [ + "OFFICIAL_FIX", + "TEMPORARY_FIX", + "WORKAROUND", + "UNAVAILABLE", + "NOT_DEFINED" + ] + }, + "confidenceType": { + "type": "string", + "enum": [ + "UNKNOWN", + "REASONABLE", + "CONFIRMED", + "NOT_DEFINED" + ] + }, + "ciaRequirementType": { + "type": "string", + "enum": [ + "LOW", + "MEDIUM", + "HIGH", + "NOT_DEFINED" + ] + }, + "scoreType": { + "type": "number", + "minimum": 0, + "maximum": 10 + }, + "severityType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "MEDIUM", + "HIGH", + "CRITICAL" + ] + } + }, + "properties": { + "version": { + "description": "CVSS Version", + "type": "string", + "enum": [ + "3.1" + ] + }, + "vectorString": { + "type": "string", + "pattern": 
"^CVSS:3[.]1/((AV:[NALP]|AC:[LH]|PR:[NLH]|UI:[NR]|S:[UC]|[CIA]:[NLH]|E:[XUPFH]|RL:[XOTWU]|RC:[XURC]|[CIA]R:[XLMH]|MAV:[XNALP]|MAC:[XLH]|MPR:[XNLH]|MUI:[XNR]|MS:[XUC]|M[CIA]:[XNLH])/)*(AV:[NALP]|AC:[LH]|PR:[NLH]|UI:[NR]|S:[UC]|[CIA]:[NLH]|E:[XUPFH]|RL:[XOTWU]|RC:[XURC]|[CIA]R:[XLMH]|MAV:[XNALP]|MAC:[XLH]|MPR:[XNLH]|MUI:[XNR]|MS:[XUC]|M[CIA]:[XNLH])$" + }, + "attackVector": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/attackVectorType" + }, + "attackComplexity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/attackComplexityType" + }, + "privilegesRequired": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/privilegesRequiredType" + }, + "userInteraction": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/userInteractionType" + }, + "scope": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/scopeType" + }, + "confidentialityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaType" + }, + "integrityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaType" + }, + "availabilityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaType" + }, + "baseScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/scoreType" + }, + "baseSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/severityType" + }, + "exploitCodeMaturity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/exploitCodeMaturityType" + }, + "remediationLevel": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/remediationLevelType" + }, + "reportConfidence": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/confidenceType" + }, + "temporalScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/scoreType" + }, + "temporalSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/severityType" + }, + "confidentialityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaRequirementType" + }, + "integrityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaRequirementType" + }, + "availabilityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/ciaRequirementType" + }, + "modifiedAttackVector": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedAttackVectorType" + }, + "modifiedAttackComplexity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedAttackComplexityType" + }, + "modifiedPrivilegesRequired": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedPrivilegesRequiredType" + }, + "modifiedUserInteraction": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedUserInteractionType" + }, + "modifiedScope": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedScopeType" + }, + "modifiedConfidentialityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedCiaType" + }, + "modifiedIntegrityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedCiaType" + }, + "modifiedAvailabilityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/modifiedCiaType" + }, + "environmentalScore": { + "$ref": 
"#/definitions/metrics/items/properties/cvssV3_1/definitions/scoreType" + }, + "environmentalSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_1/definitions/severityType" + } + }, + "required": [ + "version", + "vectorString", + "baseScore", + "baseSeverity" + ] + }, + "cvssV3_0": { + "$schema": "http://json-schema.org/draft-04/schema#", + "title": "JSON Schema for Common Vulnerability Scoring System version 3.0", + "type": "object", + "definitions": { + "attackVectorType": { + "type": "string", + "enum": [ + "NETWORK", + "ADJACENT_NETWORK", + "LOCAL", + "PHYSICAL" + ] + }, + "modifiedAttackVectorType": { + "type": "string", + "enum": [ + "NETWORK", + "ADJACENT_NETWORK", + "LOCAL", + "PHYSICAL", + "NOT_DEFINED" + ] + }, + "attackComplexityType": { + "type": "string", + "enum": [ + "HIGH", + "LOW" + ] + }, + "modifiedAttackComplexityType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NOT_DEFINED" + ] + }, + "privilegesRequiredType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NONE" + ] + }, + "modifiedPrivilegesRequiredType": { + "type": "string", + "enum": [ + "HIGH", + "LOW", + "NONE", + "NOT_DEFINED" + ] + }, + "userInteractionType": { + "type": "string", + "enum": [ + "NONE", + "REQUIRED" + ] + }, + "modifiedUserInteractionType": { + "type": "string", + "enum": [ + "NONE", + "REQUIRED", + "NOT_DEFINED" + ] + }, + "scopeType": { + "type": "string", + "enum": [ + "UNCHANGED", + "CHANGED" + ] + }, + "modifiedScopeType": { + "type": "string", + "enum": [ + "UNCHANGED", + "CHANGED", + "NOT_DEFINED" + ] + }, + "ciaType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "HIGH" + ] + }, + "modifiedCiaType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "HIGH", + "NOT_DEFINED" + ] + }, + "exploitCodeMaturityType": { + "type": "string", + "enum": [ + "UNPROVEN", + "PROOF_OF_CONCEPT", + "FUNCTIONAL", + "HIGH", + "NOT_DEFINED" + ] + }, + "remediationLevelType": { + "type": "string", + "enum": [ + "OFFICIAL_FIX", + "TEMPORARY_FIX", + "WORKAROUND", + "UNAVAILABLE", + "NOT_DEFINED" + ] + }, + "confidenceType": { + "type": "string", + "enum": [ + "UNKNOWN", + "REASONABLE", + "CONFIRMED", + "NOT_DEFINED" + ] + }, + "ciaRequirementType": { + "type": "string", + "enum": [ + "LOW", + "MEDIUM", + "HIGH", + "NOT_DEFINED" + ] + }, + "scoreType": { + "type": "number", + "minimum": 0, + "maximum": 10 + }, + "severityType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "MEDIUM", + "HIGH", + "CRITICAL" + ] + } + }, + "properties": { + "version": { + "description": "CVSS Version", + "type": "string", + "enum": [ + "3.0" + ] + }, + "vectorString": { + "type": "string", + "pattern": "^CVSS:3[.]0/((AV:[NALP]|AC:[LH]|PR:[UNLH]|UI:[NR]|S:[UC]|[CIA]:[NLH]|E:[XUPFH]|RL:[XOTWU]|RC:[XURC]|[CIA]R:[XLMH]|MAV:[XNALP]|MAC:[XLH]|MPR:[XUNLH]|MUI:[XNR]|MS:[XUC]|M[CIA]:[XNLH])/)*(AV:[NALP]|AC:[LH]|PR:[UNLH]|UI:[NR]|S:[UC]|[CIA]:[NLH]|E:[XUPFH]|RL:[XOTWU]|RC:[XURC]|[CIA]R:[XLMH]|MAV:[XNALP]|MAC:[XLH]|MPR:[XUNLH]|MUI:[XNR]|MS:[XUC]|M[CIA]:[XNLH])$" + }, + "attackVector": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/attackVectorType" + }, + "attackComplexity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/attackComplexityType" + }, + "privilegesRequired": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/privilegesRequiredType" + }, + "userInteraction": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/userInteractionType" + }, + "scope": { + "$ref": 
"#/definitions/metrics/items/properties/cvssV3_0/definitions/scopeType" + }, + "confidentialityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaType" + }, + "integrityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaType" + }, + "availabilityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaType" + }, + "baseScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/scoreType" + }, + "baseSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/severityType" + }, + "exploitCodeMaturity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/exploitCodeMaturityType" + }, + "remediationLevel": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/remediationLevelType" + }, + "reportConfidence": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/confidenceType" + }, + "temporalScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/scoreType" + }, + "temporalSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/severityType" + }, + "confidentialityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaRequirementType" + }, + "integrityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaRequirementType" + }, + "availabilityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/ciaRequirementType" + }, + "modifiedAttackVector": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedAttackVectorType" + }, + "modifiedAttackComplexity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedAttackComplexityType" + }, + "modifiedPrivilegesRequired": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedPrivilegesRequiredType" + }, + "modifiedUserInteraction": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedUserInteractionType" + }, + "modifiedScope": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedScopeType" + }, + "modifiedConfidentialityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedCiaType" + }, + "modifiedIntegrityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedCiaType" + }, + "modifiedAvailabilityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/modifiedCiaType" + }, + "environmentalScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/scoreType" + }, + "environmentalSeverity": { + "$ref": "#/definitions/metrics/items/properties/cvssV3_0/definitions/severityType" + } + }, + "required": [ + "version", + "vectorString", + "baseScore", + "baseSeverity" + ] + }, + "cvssV2_0": { + "$schema": "http://json-schema.org/draft-04/schema#", + "title": "JSON Schema for Common Vulnerability Scoring System version 2.0", + "type": "object", + "definitions": { + "accessVectorType": { + "type": "string", + "enum": [ + "NETWORK", + "ADJACENT_NETWORK", + "LOCAL" + ] + }, + "accessComplexityType": { + "type": "string", + "enum": [ + "HIGH", + "MEDIUM", + "LOW" + ] + }, + "authenticationType": { + "type": "string", + "enum": [ + "MULTIPLE", + "SINGLE", + "NONE" + ] + }, + "ciaType": { + "type": "string", + "enum": [ + "NONE", + "PARTIAL", + "COMPLETE" + ] + }, + 
"exploitabilityType": { + "type": "string", + "enum": [ + "UNPROVEN", + "PROOF_OF_CONCEPT", + "FUNCTIONAL", + "HIGH", + "NOT_DEFINED" + ] + }, + "remediationLevelType": { + "type": "string", + "enum": [ + "OFFICIAL_FIX", + "TEMPORARY_FIX", + "WORKAROUND", + "UNAVAILABLE", + "NOT_DEFINED" + ] + }, + "reportConfidenceType": { + "type": "string", + "enum": [ + "UNCONFIRMED", + "UNCORROBORATED", + "CONFIRMED", + "NOT_DEFINED" + ] + }, + "collateralDamagePotentialType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "LOW_MEDIUM", + "MEDIUM_HIGH", + "HIGH", + "NOT_DEFINED" + ] + }, + "targetDistributionType": { + "type": "string", + "enum": [ + "NONE", + "LOW", + "MEDIUM", + "HIGH", + "NOT_DEFINED" + ] + }, + "ciaRequirementType": { + "type": "string", + "enum": [ + "LOW", + "MEDIUM", + "HIGH", + "NOT_DEFINED" + ] + }, + "scoreType": { + "type": "number", + "minimum": 0, + "maximum": 10 + } + }, + "properties": { + "version": { + "description": "CVSS Version", + "type": "string", + "enum": [ + "2.0" + ] + }, + "vectorString": { + "type": "string", + "pattern": "^((AV:[NAL]|AC:[LMH]|Au:[MSN]|[CIA]:[NPC]|E:(U|POC|F|H|ND)|RL:(OF|TF|W|U|ND)|RC:(UC|UR|C|ND)|CDP:(N|L|LM|MH|H|ND)|TD:(N|L|M|H|ND)|[CIA]R:(L|M|H|ND))/)*(AV:[NAL]|AC:[LMH]|Au:[MSN]|[CIA]:[NPC]|E:(U|POC|F|H|ND)|RL:(OF|TF|W|U|ND)|RC:(UC|UR|C|ND)|CDP:(N|L|LM|MH|H|ND)|TD:(N|L|M|H|ND)|[CIA]R:(L|M|H|ND))$" + }, + "accessVector": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/accessVectorType" + }, + "accessComplexity": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/accessComplexityType" + }, + "authentication": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/authenticationType" + }, + "confidentialityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaType" + }, + "integrityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaType" + }, + "availabilityImpact": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaType" + }, + "baseScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/scoreType" + }, + "exploitability": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/exploitabilityType" + }, + "remediationLevel": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/remediationLevelType" + }, + "reportConfidence": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/reportConfidenceType" + }, + "temporalScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/scoreType" + }, + "collateralDamagePotential": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/collateralDamagePotentialType" + }, + "targetDistribution": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/targetDistributionType" + }, + "confidentialityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaRequirementType" + }, + "integrityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaRequirementType" + }, + "availabilityRequirement": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/ciaRequirementType" + }, + "environmentalScore": { + "$ref": "#/definitions/metrics/items/properties/cvssV2_0/definitions/scoreType" + } + }, + "required": [ + "version", + "vectorString", + "baseScore" + ] + }, + "other": { + "type": "object", + "description": "A non-standard impact 
description, may be prose or JSON block.", + "required": [ + "type", + "content" + ], + "properties": { + "type": { + "description": "Name of the non-standard impact metrics format used.", + "type": "string", + "minLength": 1, + "maxLength": 128 + }, + "content": { + "type": "object", + "description": "JSON object not covered by another metrics format.", + "minProperties": 1 + } + } + } + } + } + }, + "configurations": { + "type": "array", + "description": "Configurations required for exploiting this vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/description" + } + }, + "workarounds": { + "type": "array", + "description": "Workarounds and mitigations for this vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/description" + } + }, + "solutions": { + "type": "array", + "description": "Information about solutions or remediations available for this vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/description" + } + }, + "exploits": { + "type": "array", + "description": "Information about exploits of the vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "$ref": "#/definitions/description" + } + }, + "timeline": { + "type": "array", + "description": "This is timeline information for significant events about this vulnerability or changes to the CVE Record.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "required": [ + "time", + "lang", + "value" + ], + "properties": { + "time": { + "description": "Timestamp representing when the event in the timeline occurred. The timestamp format is based on RFC3339 and ISO ISO8601, with an optional timezone. yyyy-MM-ddTHH:mm:ssZZZZ - if the timezone offset is not given, GMT (0000) is assumed.", + "$ref": "#/definitions/timestamp" + }, + "lang": { + "description": "The language used in the description of the event. The language field is included so that CVE Records can support translations. The value must be a BCP 47 language code.", + "$ref": "#/definitions/language" + }, + "value": { + "description": "A summary of the event.", + "type": "string", + "minLength": 1, + "maxLength": 4096 + } + } + } + }, + "credits": { + "type": "array", + "description": "Statements acknowledging specific people, organizations, or tools recognizing the work done in researching, discovering, remediating or helping with activities related to this CVE.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "properties": { + "lang": { + "description": "The language used when describing the credits. The language field is included so that CVE Records can support translations. The value must be a BCP 47 language code.", + "$ref": "#/definitions/language" + }, + "value": { + "type": "string", + "minLength": 1, + "maxLength": 4096 + }, + "user": { + "description": "UUID of the user being credited if present in the CVE User Registry (optional). This UUID can be used to lookup the user record in the user registry service.", + "$ref": "#/definitions/uuidType" + }, + "type": { + "type": "string", + "description": "Type or role of the entity being credited (optional). 
finder: identifies the vulnerability.\nreporter: notifies the vendor of the vulnerability to a CNA.\nanalyst: validates the vulnerability to ensure accuracy or severity.\ncoordinator: facilitates the coordinated response process.\nremediation developer: prepares a code change or other remediation plans.\nremediation reviewer: reviews vulnerability remediation plans or code changes for effectiveness and completeness.\nremediation verifier: tests and verifies the vulnerability or its remediation.\ntool: names of tools used in vulnerability discovery or identification.\nsponsor: supports the vulnerability identification or remediation activities.", + "default": "finder", + "enum": [ + "finder", + "reporter", + "analyst", + "coordinator", + "remediation developer", + "remediation reviewer", + "remediation verifier", + "tool", + "sponsor", + "other" + ] + } + }, + "required": [ + "lang", + "value" + ] + } + }, + "source": { + "type": "object", + "description": "This is the source information (who discovered it, who researched it, etc.) and optionally a chain of CNA information (e.g. the originating CNA and subsequent parent CNAs who have processed it before it arrives at the MITRE root).\n Must contain: IF this is in the root level it MUST contain a CNA_chain entry, IF this source entry is NOT in the root (e.g. it is part of a vendor statement) then it must contain at least one type of data entry.", + "minProperties": 1 + }, + "language": { + "type": "string", + "description": "BCP 47 language code, language-region.", + "default": "en", + "pattern": "^[A-Za-z]{2,4}([_-][A-Za-z]{4})?([_-]([A-Za-z]{2}|[0-9]{3}))?$" + }, + "englishLanguage": { + "type": "string", + "description": "BCP 47 language code, language-region, required to be English.", + "pattern": "^en([_-][A-Za-z]{4})?([_-]([A-Za-z]{2}|[0-9]{3}))?$" + }, + "taxonomyMappings": { + "type": "array", + "description": "List of taxonomy items related to the vulnerability.", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "", + "required": [ + "taxonomyName", + "taxonomyRelations" + ], + "properties": { + "taxonomyName": { + "type": "string", + "description": "The name of the taxonomy.", + "minLength": 1, + "maxLength": 128 + }, + "taxonomyVersion": { + "type": "string", + "description": "The version of taxonomy the identifiers come from.", + "minLength": 1, + "maxLength": 128 + }, + "taxonomyRelations": { + "type": "array", + "description": "", + "minItems": 1, + "uniqueItems": true, + "items": { + "type": "object", + "description": "List of relationships to the taxonomy for the vulnerability. Relationships can be between the taxonomy and the CVE or two taxonomy items.", + "required": [ + "taxonomyId", + "relationshipName", + "relationshipValue" + ], + "properties": { + "taxonomyId": { + "type": "string", + "description": "Identifier of the item in the taxonomy. Used as the subject of the relationship.", + "minLength": 1, + "maxLength": 2048 + }, + "relationshipName": { + "type": "string", + "description": "A description of the relationship.", + "minLength": 1, + "maxLength": 128 + }, + "relationshipValue": { + "type": "string", + "description": "The target of the relationship. 
Can be the CVE ID or another taxonomy identifier.", + "minLength": 1, + "maxLength": 2048 + } + } + } + } + } + } + }, + "tagExtension": { + "type": "string", + "minLength": 2, + "maxLength": 128, + "pattern": "^x_.*$" + }, + "cnaTags": { + "type": "array", + "description": "Tags provided by a CNA describing the CVE Record.", + "uniqueItems": true, + "minItems": 1, + "items": { + "oneOf": [ + { + "$ref": "#/definitions/tagExtension" + }, + { + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://cve.mitre.org/cve/v5_00/tags/cna/", + "type": "string", + "description": "exclusively-hosted-service: All known software and/or hardware affected by this CVE Record is known to exist only in the affected hosted service. If the vulnerability affects both hosted and on-prem software and/or hardware, then the tag should not be used.\n\nunsupported-when-assigned: Used by the assigning CNA to indicate that when a request for a CVE assignment was received, the product was already end-of-life (EOL) or a product or specific version was deemed not to be supported by the vendor. This tag should only be applied to a CVE Record when all affected products or version lines referenced in the CVE-Record are EOL.\n\ndisputed: When one party disagrees with another party's assertion that a particular issue in software is a vulnerability, a CVE Record assigned to that issue may be tagged as being 'disputed'.", + "enum": [ + "unsupported-when-assigned", + "exclusively-hosted-service", + "disputed" + ] + } + ] + } + }, + "adpTags": { + "type": "array", + "description": "Tags provided by an ADP describing the CVE Record.", + "uniqueItems": true, + "minItems": 1, + "items": { + "oneOf": [ + { + "$ref": "#/definitions/tagExtension" + }, + { + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "https://cve.mitre.org/cve/v5_00/tags/adp/", + "type": "string", + "description": "disputed: When one party disagrees with another party's assertion that a particular issue in software is a vulnerability, a CVE Record assigned to that issue may be tagged as being 'disputed'.", + "enum": [ + "disputed" + ] + } + ] + } + } + }, + "oneOf": [ + { + "title": "Published", + "description": "When a CNA populates the data associated with a CVE ID as a CVE Record, the state of the CVE Record is Published.", + "properties": { + "dataType": { + "$ref": "#/definitions/dataType" + }, + "dataVersion": { + "$ref": "#/definitions/dataVersion" + }, + "cveMetadata": { + "$ref": "#/definitions/cveMetadataPublished" + }, + "containers": { + "description": "A set of structures (called containers) used to store vulnerability information related to a specific CVE ID provided by a specific organization participating in the CVE program. Each container includes information provided by a different source.\n\nAt a minimum, a 'cna' container containing the vulnerability information provided by the CNA who initially assigned the CVE ID must be included.\n\nThere can only be one 'cna' container, as there can only be one assigning CNA. However, there can be multiple 'adp' containers, allowing multiple organizations participating in the CVE program to add additional information related to the vulnerability. For the most part, the 'cna' and 'adp' containers contain the same properties. The main differences are the source of the information. 
The 'cna' container requires the CNA to include certain fields, while the 'adp' container does not.", + "type": "object", + "properties": { + "cna": { + "$ref": "#/definitions/cnaPublishedContainer" + }, + "adp": { + "type": "array", + "items": { + "$ref": "#/definitions/adpContainer" + }, + "minItems": 1, + "uniqueItems": true + } + }, + "required": [ + "cna" + ], + "additionalProperties": false + } + }, + "required": [ + "dataType", + "dataVersion", + "cveMetadata", + "containers" + ], + "additionalProperties": false + }, + { + "title": "Rejected", + "description": "If the CVE ID and associated CVE Record should no longer be used, the CVE Record is placed in the Rejected state. A Rejected CVE Record remains on the CVE List so that users can know when it is invalid.", + "properties": { + "dataType": { + "$ref": "#/definitions/dataType" + }, + "dataVersion": { + "$ref": "#/definitions/dataVersion" + }, + "cveMetadata": { + "$ref": "#/definitions/cveMetadataRejected" + }, + "containers": { + "description": "A set of structures (called containers) used to store vulnerability information related to a specific CVE ID provided by a specific organization participating in the CVE program. Each container includes information provided by a different source.\n\nAt minimum, a 'cna' container containing the vulnerability information provided by the CNA who initially assigned the CVE ID must be included.\n\nThere can only be one 'cna' container, as there can only be one assigning CNA.", + "type": "object", + "properties": { + "cna": { + "$ref": "#/definitions/cnaRejectedContainer" + } + }, + "required": [ + "cna" + ], + "additionalProperties": false + } + }, + "required": [ + "dataType", + "dataVersion", + "cveMetadata", + "containers" + ], + "additionalProperties": false + } + ] +} diff --git a/deps/wazuh_testing/wazuh_testing/tools/migration_tool/__init__.py b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/__init__.py new file mode 100644 index 0000000000..f425b1045f --- /dev/null +++ b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/__init__.py @@ -0,0 +1,46 @@ +''' +Copyright (C) 2015-2023, Wazuh Inc. +Created by Wazuh, Inc. . 
+This program is free software; you can redistribute it and/or modify it under the terms +''' +import os + + +# Module variables +CVE5_SCHEMA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'CVE_JSON_5.0_bundled.json') +DELTA_SCHEMA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'delta_schema.json') +WORKING_DIR = '/var/wazuh' +MIGRATION_TOOL_PATH = f"{WORKING_DIR}/bin/content_migration" +MIGRATION_TOOL_LOG_PATH = f"{WORKING_DIR}/logs/content_migration.log" +GENERATED_FILES_DIR = f"{WORKING_DIR}/incoming" +SNAPSHOTS_DIR = f"{GENERATED_FILES_DIR}/snapshots" +DOWNLOADS_DIR = f"{GENERATED_FILES_DIR}/downloads" +UNCOMPRESSED_DIR = f"{GENERATED_FILES_DIR}/uncompressed" +MYSQL_CREDENTIALS = { + 'user': None, + 'password': None, + 'host': None, + 'port': None, + 'database': None +} + +# Callback messages +CB_PROCESS_STARTED = r'.+\[info\]\[Orchestrator - start\]: Starting process' +CB_FETCHING_STAGE_INITIALIZED = r'.+\[info\].+handleRequest\]: Starting fetch of .+' +CB_FETCHING_STAGE_FINISHED = r'.+\[info\].+fetch\]: Download done successfully' +CB_DECOMPRESSION_STAGE_INITIALIZED = r'.+\[info\].+handleRequest\]: Starting decompression of .+' +CB_PARSER_STAGE_INITIALIZED = r'.+\[info\].+Parser - handleRequest\]: Starting parse of .+' +CB_NORMALIZER_STAGE_INITIALIZED = r'.+\[info\]\[Normalizer.+ - handleRequest]: Starting process' +CB_DIFF_STAGE_INITIALIZED = r'.+\[info\]\[DiffEngine.+ - handleRequest\]: Starting process' +CB_DIFF_STAGE_FINISHED = r'.+\[info\]\[DiffEngine.+ - diffData\]: Created last snapshot: /var/wazuh/incoming/' +CB_PUBLISHER_STAGE_INITIALIZED = r'.+\[info\]\[DiffPublisher - handleRequest\]: Starting process. Configuration:' +CB_PROCESS_COMPLETED = r'.+Migration process completed successfully!' +CB_STAGES = [ + CB_PROCESS_STARTED, CB_FETCHING_STAGE_INITIALIZED, CB_FETCHING_STAGE_FINISHED, CB_DECOMPRESSION_STAGE_INITIALIZED, + CB_PARSER_STAGE_INITIALIZED, CB_NORMALIZER_STAGE_INITIALIZED, CB_DIFF_STAGE_INITIALIZED, CB_DIFF_STAGE_FINISHED, + CB_PUBLISHER_STAGE_INITIALIZED, CB_PROCESS_COMPLETED +] +CB_MIGRATION_SKIPPED = r'.+\[info\]\[MigrationStatusCheck.+\]: File already migrated. Stopping migration process.' 
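The `CB_*` regular expressions above are log callbacks for the migration tool log. As a rough sketch of how a test might consume them, the snippet below waits for the process-completed message using the framework's `FileMonitor`; the 60-second timeout and the plain `re`-based callback are assumptions made for illustration, not behaviour introduced by this PR.

```python
import re

from wazuh_testing.tools.migration_tool import CB_PROCESS_COMPLETED, MIGRATION_TOOL_LOG_PATH
from wazuh_testing.tools.monitoring import FileMonitor

# Sketch: block until the migration tool logs the 'process completed' message.
log_monitor = FileMonitor(MIGRATION_TOOL_LOG_PATH)
log_monitor.start(timeout=60,
                  callback=lambda line: re.match(CB_PROCESS_COMPLETED, line),
                  error_message='Migration process did not finish in time')
```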
+CB_REPORT_ERROR_MESSAGE = r'Remote exited with error' +CB_INVALID_CONFIG_MESSAGE = r'No valid configuration file was found at' +ERROR_MESSAGES = [CB_REPORT_ERROR_MESSAGE, CB_INVALID_CONFIG_MESSAGE] diff --git a/deps/wazuh_testing/wazuh_testing/tools/migration_tool/delta_schema.json b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/delta_schema.json new file mode 100644 index 0000000000..1921c1e697 --- /dev/null +++ b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/delta_schema.json @@ -0,0 +1,26 @@ + +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "$id": "/wazuh-content-deltas-schema-V1", + "type": "object", + "properties": { + "cve_id": { + "description": "The unique identifier of a vulnerability.", + "type": "string" + }, + "data_blob": { + "description": "The content of the delta.", + "type": "string" + }, + "data_hash": { + "description": "The hash of the delta calculated from the data_blob", + "type": "string" + }, + "operation": { + "description": "The operation to be executed in the DB.", + "type": "string", + "enum": ["insert", "update", "delete"] + } + }, + "required": [ "cve_id", "data_blob", "data_hash", "operation"] +} diff --git a/deps/wazuh_testing/wazuh_testing/tools/migration_tool/utils.py b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/utils.py new file mode 100644 index 0000000000..d74f117834 --- /dev/null +++ b/deps/wazuh_testing/wazuh_testing/tools/migration_tool/utils.py @@ -0,0 +1,190 @@ +''' +Copyright (C) 2015-2023, Wazuh Inc. +Created by Wazuh, Inc. . +This program is free software; you can redistribute it and/or modify it under the terms +''' +import glob +import json +import os +import subprocess as sbp + +import mysql.connector +from jsonschema import validate +from jsonschema.exceptions import ValidationError +from mysql.connector import errorcode +from wazuh_testing.tools.migration_tool import MIGRATION_TOOL_PATH, CVE5_SCHEMA_PATH, DELTA_SCHEMA_PATH, \ + ERROR_MESSAGES, SNAPSHOTS_DIR, DOWNLOADS_DIR, MIGRATION_TOOL_LOG_PATH, \ + MYSQL_CREDENTIALS, UNCOMPRESSED_DIR +from wazuh_testing.tools.file import delete_all_files_in_folder, read_json_file, truncate_file +from wazuh_testing.tools.logging import Logging + +logger = Logging('migration_tool') + + +def run_content_migration_tool(args='', config_files=None): + '''Run the Content Migration Tool with specified parameters and get the output. + + Args: + args (str): Arguments to be passed to the tool. For instance: '--debug' or '-w /tmp/workdir' + config_files (list): List of configuration files to be executed. + Returns: + output (str): Result of the tool execution if no error was thrown. + errors (str): Error output if the execution fails. + ''' + errors = '' + + def run_tool(cmd): + proc = sbp.Popen(cmd, shell=True, stdout=sbp.PIPE, stderr=sbp.PIPE) + out, err = proc.communicate() + + return out, err + + for file_path in config_files: + truncate_file(MIGRATION_TOOL_LOG_PATH) + command = ' '.join([MIGRATION_TOOL_PATH, '-i', file_path, args]) + output, _ = run_tool(command) + output = output.decode() + error_checker = [True for msg in ERROR_MESSAGES if msg in output] + if len(error_checker) > 0: + errors += f"\n{output}" + + return output, errors + + +def get_latest_delta_file(deltas_filepath): + '''Select the newest delta file generated (where the results are) from the list of all files. + + Args: + deltas_filepath (str): Path where the files are located. + Returns: + newest_file (str): Path of the newest file. 
+ ''' + all_files = glob.glob(os.path.join(deltas_filepath, '*.delta.*')) + newest_file = max(all_files, key=os.path.getctime) + + return newest_file + + +def sanitize_configuration(configuration): + '''Normalize the configuration for it to be correctly processed and compatible. + + Args: + configuration (list): Test case configuration to be sanitized. + Returns: + configuration (list): Configuration normalized. + ''' + for configurations_obj in configuration: + configurations_list = configurations_obj['configurations'] + for config_obj in configurations_list: + for key in config_obj: + config_obj[key.lower()] = config_obj.pop(key) + + return configuration + + +def validate_json_against_schema(json_document, schema): + '''Validate a JSON document under the given schema. + + Args: + json_document (str): Path of the JSON document to be validated + schema (str): Path of the CVE5 Schema by default. + Returns: + result (bool): False if the validation thrown an error, True if not. + error (str): Error in the JSON document. + ''' + schema = read_json_file(schema) + + try: + validate(instance=json_document, schema=schema) + except ValidationError as err: + return False, err.message + + return True, '' + + +def validate_against_delta_schema(_elements): + '''Wrapper function. Validate a file with deltas under the Delta schema. + + Args: + _elements (dict): Python dictionary containing the data to be validated against the Delta schema. + ''' + _result = True + _errors = [] + for cve in _elements: + _result, _error = validate_json_against_schema(cve, DELTA_SCHEMA_PATH) + if _result is False: + _errors.append(_error) + + return _errors + + +def validate_against_cve5_schema(_elements): + '''Wrapper function. Validate a file with deltas under the CVE5 schema. + + Args: + _elements (dict): Python dictionary containing the data to be validated against the CVE5 schema. + ''' + _result = True + _errors = [] + + for cves in _elements: + data = json.loads(cves['data_blob']) + _result, _error = validate_json_against_schema(data, CVE5_SCHEMA_PATH) + if _result is False: + _errors.append(_error) + + return _errors + + +def query_publisher_db(query): + '''Function to query the DB created by Content Migration Tool. + Args: + query (str): Query to send to the DB. + Returns: + result (list): Query results, empty if no query was executed or no results were returned. + ''' + result = [] + + try: + connection = mysql.connector.connect( + host=MYSQL_CREDENTIALS['host'], + user=MYSQL_CREDENTIALS['user'], + password=MYSQL_CREDENTIALS['password'], + database=MYSQL_CREDENTIALS['database'] + ) + except mysql.connector.Error as error: + connection = None + if error.errno == errorcode.ER_ACCESS_DENIED_ERROR: + logger.error('Something is wrong with your user name or password') + elif error.errno == errorcode.ER_BAD_DB_ERROR: + logger.error('Database does not exist') + else: + logger.error(error) + + if connection is not None: + cursor = connection.cursor() + cursor.execute(query) + result = cursor.fetchall() + connection.close() + + return result + + +def clean_migration_tool_output_files(): + '''Remove all files generated by Content Migration Tool. + ''' + output_folders = [SNAPSHOTS_DIR, DOWNLOADS_DIR, UNCOMPRESSED_DIR] + vendors_folders = os.listdir(SNAPSHOTS_DIR) + for output_folder in output_folders: + for folder in vendors_folders: + folder = os.path.join(output_folder, folder) + delete_all_files_in_folder(folder) + + +def drop_migration_tool_tables(): + '''Remove the tables created by CMT. 
+ ''' + tables = query_publisher_db('SHOW tables;') + for table in tables: + # `table` is a tuple with 1 element, so this one is selected + query_publisher_db(f"DROP TABLE {table[0]};") diff --git a/deps/wazuh_testing/wazuh_testing/tools/performance/csv_parser.py b/deps/wazuh_testing/wazuh_testing/tools/performance/csv_parser.py index 4f7c1a6866..25995bf28f 100644 --- a/deps/wazuh_testing/wazuh_testing/tools/performance/csv_parser.py +++ b/deps/wazuh_testing/wazuh_testing/tools/performance/csv_parser.py @@ -127,7 +127,7 @@ def _calculate_all_df_stats(self, dfs_dict): for phase in [self.SETUP_PHASE, self.STABLE_PHASE]: for file_name, file_df in files.items(): trimmed_df = self._trim_dataframe(file_df, phase, setup_datetime) - if len(trimmed_df): + if len(trimmed_df) and not (phase == self.STABLE_PHASE and file_name == 'integrity_sync'): result[phase][file_name][node] = self._calculate_stats(trimmed_df) return result diff --git a/deps/wazuh_testing/wazuh_testing/wazuh_variables.py b/deps/wazuh_testing/wazuh_testing/wazuh_variables.py deleted file mode 100644 index 7842698028..0000000000 --- a/deps/wazuh_testing/wazuh_testing/wazuh_variables.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (C) 2015-2021, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -''' -The purpose of this file is to contain all the variables necessary for Wazuh in order to be easier -to maintain if one of them changes in the future. -''' -# Local internal options -WINDOWS_DEBUG = 'windows.debug' -SYSCHECK_DEBUG = 'syscheck.debug' -VERBOSE_DEBUG_OUTPUT = 2 - -WAZUH_SERVICES_STOP = 'stop' -WAZUH_SERVICES_START = 'start' - - -# Configurations -DATA = 'data' -WAZUH_LOG_MONITOR = 'wazuh_log_monitor' diff --git a/requirements.txt b/requirements.txt index 846bd489af..336cfa1080 100644 --- a/requirements.txt +++ b/requirements.txt @@ -48,3 +48,5 @@ deepdiff==5.6.0; platform_system == "Linux" or platform_system=='Windows' libcst==0.3.23 ; python_version <= '3.6' treelib==1.6.1 prettytable; platform_system == "Linux" +mysql-connector-python==8.0.32; platform_system == 'Linux' and python_version >= '3.7' +protobuf>=3.11.0,<=3.20.3; platform_system == 'Linux' and python_version >= '3.7' diff --git a/tests/integration/conftest.py b/tests/integration/conftest.py index 99d1749ee4..0793ac144f 100644 --- a/tests/integration/conftest.py +++ b/tests/integration/conftest.py @@ -2,6 +2,7 @@ # Created by Wazuh, Inc. . 
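Putting the migration-tool helpers above together, a test could run the tool against a configuration file, pick up the newest generated delta file and validate its entries against both schemas. The sketch below assumes the delta file lives under `UNCOMPRESSED_DIR`, holds a JSON array of delta objects (`cve_id`, `data_blob`, `data_hash`, `operation`), and that the configuration path is supplied by the test; these are illustrative assumptions rather than behaviour asserted by this PR.

```python
from wazuh_testing.tools.file import read_json_file
from wazuh_testing.tools.migration_tool import UNCOMPRESSED_DIR
from wazuh_testing.tools.migration_tool.utils import (get_latest_delta_file,
                                                      run_content_migration_tool,
                                                      validate_against_cve5_schema,
                                                      validate_against_delta_schema)

# Sketch: run the Content Migration Tool with a placeholder configuration file.
output, errors = run_content_migration_tool(config_files=['/tmp/config.yaml'])
assert errors == '', f'Migration tool reported errors:\n{errors}'

# Locate the newest delta file and validate every entry against both schemas.
latest_delta = get_latest_delta_file(UNCOMPRESSED_DIR)
deltas = read_json_file(latest_delta)

assert validate_against_delta_schema(deltas) == []   # deltas match the delta schema
assert validate_against_cve5_schema(deltas) == []    # each data_blob is a valid CVE5 record
```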
# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + import json import os import re @@ -17,7 +18,8 @@ import wazuh_testing.tools.configuration as conf from wazuh_testing import global_parameters, logger, ALERTS_JSON_PATH, ARCHIVES_LOG_PATH, ARCHIVES_JSON_PATH from wazuh_testing.logcollector import create_file_structure, delete_file_structure -from wazuh_testing.tools import LOG_FILE_PATH, WAZUH_CONF, get_service, ALERT_FILE_PATH, WAZUH_LOCAL_INTERNAL_OPTIONS +from wazuh_testing.tools import (PREFIX, LOG_FILE_PATH, WAZUH_CONF, get_service, ALERT_FILE_PATH, + WAZUH_LOCAL_INTERNAL_OPTIONS) from wazuh_testing.tools.configuration import get_wazuh_conf, set_section_wazuh_conf, write_wazuh_conf from wazuh_testing.tools.file import truncate_file, recursive_directory_creation, remove_file, copy, write_file from wazuh_testing.tools.monitoring import QueueMonitor, FileMonitor, SocketController, close_sockets @@ -1000,7 +1002,7 @@ def file_monitoring(request): logger.debug(f"Trucanted {file_to_monitor}") -@pytest.fixture(scope='function') +@pytest.fixture() def set_wazuh_configuration(configuration): """Set wazuh configuration @@ -1025,21 +1027,27 @@ def set_wazuh_configuration(configuration): conf.write_wazuh_conf(backup_config) -@pytest.fixture(scope='function') +@pytest.fixture() def truncate_monitored_files(): """Truncate all the log files and json alerts files before and after the test execution""" - log_files = [LOG_FILE_PATH, ALERT_FILE_PATH] + + if 'agent' in get_service(): + log_files = [LOG_FILE_PATH] + else: + log_files = [LOG_FILE_PATH, ALERT_FILE_PATH] for log_file in log_files: - truncate_file(log_file) + if os.path.isfile(os.path.join(PREFIX, log_file)): + truncate_file(log_file) yield for log_file in log_files: - truncate_file(log_file) + if os.path.isfile(os.path.join(PREFIX, log_file)): + truncate_file(log_file) -@pytest.fixture(scope='function') +@pytest.fixture() def stop_modules_function_after_execution(): """Stop wazuh modules daemon after finishing a test""" yield diff --git a/tests/integration/test_analysisd/test_predecoder_stage/data/syslog_socket_input.yaml b/tests/integration/test_analysisd/test_predecoder_stage/data/syslog_socket_input.yaml index 761b859149..b44ba13608 100644 --- a/tests/integration/test_analysisd/test_predecoder_stage/data/syslog_socket_input.yaml +++ b/tests/integration/test_analysisd/test_predecoder_stage/data/syslog_socket_input.yaml @@ -1,71 +1,122 @@ ---- -- - name: "Syslog date format 1" - description: "Check valid input" +- name: Syslog date format 1 + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "Dec 29 10:00:01 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"Dec 29 10:00:01","hostname":"linux-agent"}' -- - name: "Syslog date format 2" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, + "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", + "log_format": "syslog", "event": "Dec 29 10:00:01 linux-agent sshd[29205]: Invalid user blimey from + 18.18.18.18 port 48928", "token": "21218e6b"}} + output: >- + {"program_name":"sshd","timestamp":"Dec 29 + 10:00:01","hostname":"linux-agent"} + +- name: 
Syslog date format 2 + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "2015 Dec 29 10:00:01 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"2015 Dec 29 10:00:01"}' -- - name: "Syslog date format for rsyslog" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "2015 Dec 29 10:00:01 linux-agent sshd[29205]: Invalid user blimey from + 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"2015 Dec 29 10:00:01"}' + +- name: Syslog date format for rsyslog + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "2009-05-22T09:36:46.214994-07:00 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"2009-05-22T09:36:46.214994-07:00"}' -- - name: "Syslog date format for proftpd 1.3.5" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "2009-05-22T09:36:46.214994-07:00 linux-agent sshd[29205]: Invalid user + blimey from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"2009-05-22T09:36:46.214994-07:00"}' + +- name: Syslog date format for proftpd 1.3.5 + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "2015-04-16 21:51:02,805 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"2015-04-16 21:51:02,80"}' -- - name: "Syslog date format for xferlog date format" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "2015-04-16 21:51:02,805 linux-agent sshd[29205]: Invalid user blimey + from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"2015-04-16 21:51:02,805"}' + +- name: Syslog date format for xferlog date format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "Mon Apr 17 18:27:14 2006 1 64.160.42.130 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"timestamp":"Mon Apr 17 18:27:14 2006"}' -- - name: "Syslog date format for snort date format" - description: "Check 
valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "Mon Apr 17 18:27:14 2006 1 64.160.42.130 linux-agent sshd[29205]: + Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"timestamp":"Mon Apr 17 18:27:14 2006"}' + +- name: Syslog date format for snort date format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "01/28-09:13:16.240702 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"timestamp":"01/28-09:13:16.240702"}' -- - name: "Syslog date format for suricata date format" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "01/28-09:13:16.240702 linux-agent sshd[29205]: Invalid user blimey from + 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"timestamp":"01/28-09:13:16.240702"}' + +- name: Syslog date format for suricata date format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "01/28/1979-09:13:16.240702 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"timestamp":"01/28/1979-09:13:16.240702"}' -- - name: "Syslog date format for apache log format" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "01/28/1979-09:13:16.240702 linux-agent sshd[29205]: Invalid user blimey + from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"timestamp":"01/28/1979-09:13:16.240702"}' + +- name: Syslog date format for apache log format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "[Fri Feb 11 18:06:35 2004] [warn] linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"timestamp":"Fri Feb 11 18:06:35 2004"}' -- - name: "Syslog date format for macos ULS --syslog output" - description: "Check valid input" + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "[Fri Feb 11 18:06:35 2004] [warn] linux-agent sshd[29205]: Invalid user + blimey from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"timestamp":"Fri Feb 11 18:06:35 2004"}' + +- name: Syslog date format for macos ULS --syslog output + description: Check valid input + test_case: + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, 
"command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "2021-04-21 10:16:09.404756-0700 linux-agent sshd[29205]: Invalid user + blimey from 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"2021-04-21 10:16:09.404756-0700"}' + +- name: Syslog Umlaut date format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "2021-04-21 10:16:09.404756-0700 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"2021-04-21 10:16:09.404756-0700"}' + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "Mär 02 17:30:52 linux-agent sshd[29205]: Invalid user blimey from + 18.18.18.18 port 48928", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"Mär 02 17:30:5"}' + - - name: "Syslog Umlaut date format" - description: "Check valid input" + name: Syslog syslog-ng OSE date format + description: Check valid input test_case: - - - input: '{"version": 1, "origin": {"name": "wazuh-logtest", "module": "wazuh-logtest"}, "command": "log_processing", "parameters": {"location":"master->/var/log/syslog", "log_format": "syslog", "event": "Mär 02 17:30:52 linux-agent sshd[29205]: Invalid user blimey from 18.18.18.18 port 48928", "token": "21218e6b"}}' - output: '{"program_name":"sshd","timestamp":"Mär 02 17:30:5"}' + - input: >- + {"version": 1, "origin": {"name": "wazuh-logtest", "module": + "wazuh-logtest"}, "command": "log_processing", "parameters": + {"location":"master->/var/log/syslog", "log_format": "syslog", "event": + "2022-12-20T15:02:53.123+00:00 localhost sshd[25474]: Accepted password for + rromero from 192.168.1.133 port 49765 ssh2", "token": "21218e6b"}} + output: '{"program_name":"sshd","timestamp":"2022-12-20T15:02:53.123+00:00"}' diff --git a/tests/integration/test_authd/data/configuration_template/config_authd_use_password_invalid.yaml b/tests/integration/test_authd/data/configuration_template/config_authd_use_password_invalid.yaml new file mode 100644 index 0000000000..92a8aaf333 --- /dev/null +++ b/tests/integration/test_authd/data/configuration_template/config_authd_use_password_invalid.yaml @@ -0,0 +1,29 @@ +- tags: + - authd + apply_to_modules: + - test_use_password_invalid + sections: + - section: auth + elements: + - disabled: + value: 'no' + - port: + value: 1515 + - use_source_ip: + value: 'no' + - purge: + value: 'yes' + - use_password: + value: USE_PASSWORD + - limit_maxagents: + value: 'yes' + - ciphers: + value: HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH + - ssl_verify_host: + value: 'no' + - ssl_manager_cert: + value: /var/ossec/etc/sslmanager.cert + - ssl_manager_key: + value: /var/ossec/etc/sslmanager.key + - ssl_auto_negotiate: + value: 'no' diff --git a/tests/integration/test_authd/data/test_cases/cases_authd_use_password_invalid.yaml b/tests/integration/test_authd/data/test_cases/cases_authd_use_password_invalid.yaml new file mode 100644 index 0000000000..b35fe765a0 --- /dev/null +++ b/tests/integration/test_authd/data/test_cases/cases_authd_use_password_invalid.yaml @@ -0,0 +1,19 @@ +- name: Use empty 
password file. + description: Set the use_password tag with the value 'yes', + create the file authd.pass and keep it empty + making authd impossible to start. + configuration_parameters: + USE_PASSWORD: 'yes' + metadata: + error: Empty password provided. + password: '' + +- name: Use only spaces password. + description: Set the use_password tag with the value 'yes', + create the file authd.pass and fill it with only + spaces making authd impossible to start. + configuration_parameters: + USE_PASSWORD: 'yes' + metadata: + error: Invalid password provided. + password: ' ' diff --git a/tests/integration/test_authd/force_options/data/test_cases/valid_config/enablement_force_options.yaml b/tests/integration/test_authd/force_options/data/test_cases/valid_config/enablement_force_options.yaml index fc246efb0f..137671a623 100644 --- a/tests/integration/test_authd/force_options/data/test_cases/valid_config/enablement_force_options.yaml +++ b/tests/integration/test_authd/force_options/data/test_cases/valid_config/enablement_force_options.yaml @@ -1,19 +1,19 @@ - - name: "force_options_enabled" - description: "Check that an agent can be replaced when force is enabled" + name: force_options_enabled + description: Check that an agent can be replaced when force is enabled configuration: - force: elements: - - enabled: - value: 'yes' - - key_mismatch: - value: 'no' - - disconnected_time: - attributes: - - enabled: 'no' - value: 0 - - after_registration_time: - value: 0 + - enabled: + value: 'yes' + - key_mismatch: + value: 'no' + - disconnected_time: + attributes: + - enabled: 'no' + value: 0 + - after_registration_time: + value: 0 pre_existent_agents: - id: '001' @@ -21,41 +21,41 @@ - id: '002' name: agent_dup_ip - ip: '2.2.2.2' + ip: 2.2.2.2 test_case: - - - description: Insert an agent with duplicated name - input: - name: 'agent_dup_name' - output: - status: 'success' - log: - - Duplicate name. Removing old agent 'agent_dup_name' (id '001'). - - - description: Insert an agent with duplicated ip - input: - name: 'agent_dup_ip_new' - ip: '2.2.2.2' - output: - status: 'success' - log: - - Duplicate IP '2.2.2.2'. Removing old agent 'agent_dup_ip' (id '002'). + - + description: Insert an agent with duplicated name + input: + name: agent_dup_name + output: + status: success + log: + - Duplicate name. Removing old agent 'agent_dup_name' (id '001'). + - + description: Insert an agent with duplicated ip + input: + name: agent_dup_ip_new + ip: 2.2.2.2 + output: + status: success + log: + - Duplicate IP '2.2.2.2'. Removing old agent 'agent_dup_ip' (id '002'). - - name: "force_options_disabled" - description: "Check that an agent can´t be replaced when force is disabled" + name: force_options_disabled + description: Check that an agent cannot be replaced when force is disabled configuration: - force: elements: - - enabled: - value: 'no' - - key_mismatch: - value: 'no' - - disconnected_time: - value: 0 - attributes: - - enabled: 'no' - - after_registration_time: - value: 0 + - enabled: + value: 'no' + - key_mismatch: + value: 'no' + - disconnected_time: + value: 0 + attributes: + - enabled: 'no' + - after_registration_time: + value: 0 pre_existent_agents: - id: '001' @@ -63,90 +63,89 @@ - id: '002' name: agent_dup_ip - ip: '2.2.2.2' + ip: 2.2.2.2 test_case: - - - description: Try to replace an agent with duplicated name - input: - name: 'agent_dup_name' - output: - status: 'error' - log: - - Duplicate name 'agent_dup_name', rejecting enrollment. Agent '001' won't be removed because the force option is disabled. 
- - - description: Try to replace an agent with duplicated ip - input: - name: 'agent_dup_name_new' - ip: '2.2.2.2' - output: - status: 'error' - log: - - Duplicate IP '2.2.2.2', rejecting enrollment. Agent '002' won't be removed because the force option is disabled. + - + description: Try to replace an agent with duplicated name + input: + name: agent_dup_name + output: + status: error + log: + - Duplicate name 'agent_dup_name', rejecting enrollment. Agent '001' won't be removed because the force option + - + description: Try to replace an agent with duplicated ip + input: + name: agent_dup_name_new + ip: 2.2.2.2 + output: + status: error + log: + - Duplicate IP '2.2.2.2', rejecting enrollment. Agent '002' won't be removed because the force option - - name: "force_insert_disabled" - description: "Check that legacy force_insert overrides force.enabled" + name: force_insert_disabled + description: Check that legacy force_insert overrides force.enabled configuration: - - force: - elements: - - enabled: - value: 'yes' - - key_mismatch: - value: 'no' - - disconnected_time: - value: 0 - attributes: - - enabled: 'no' - - after_registration_time: - value: 0 - - force_insert : - value: 'no' + - force: + elements: + - enabled: + value: 'no' + - key_mismatch: + value: 'no' + - disconnected_time: + value: 0 + attributes: + - enabled: 'no' + - after_registration_time: + value: 0 + - force_insert: + value: 'no' + - force_time: + value: 0 pre_existent_agents: - id: '001' name: agent_1 log: - - The tag is deprecated since version 4.3.0. - - Setting tag to 'no' to comply with the legacy option found. + - The tag is deprecated. Use instead. + - The tag is deprecated. Use instead. test_case: - - - description: Don´t replace an agent if force_insert disabled force options - input: - name: 'agent_1' - output: - status: 'error' - log: - - Duplicate name 'agent_1', rejecting enrollment. Agent '001' won't be removed because the force option is disabled. + - + description: Don´t replace an agent if force_insert disabled force options + input: + name: agent_1 + output: + status: error + log: + - Duplicate name 'agent_1', rejecting enrollment. Agent '001' won't be removed because the force option - - name: "force_insert_enabled" - description: "Check that legacy force_insert overrides force.enabled" + name: force_insert_enabled_no_force_block + description: Check that legacy force_insert overrides force.enabled configuration: - - force: - elements: - - enabled: - value: 'no' - - key_mismatch: - value: 'no' - - disconnected_time: - value: 0 - attributes: - - enabled: 'no' - - after_registration_time: - value: 0 - - force_insert : - value: 'yes' + - force_insert: + value: 'yes' + - force_time: + value: 5 pre_existent_agents: - name: agent_1 id: '001' + connection_status: never_connected + registration_time: + delta: -10000 + disconnected_time: + delta: -3 log: - - The tag is deprecated since version 4.3.0. + - The tag is deprecated. Use instead. + - The tag is deprecated. Use instead. - Setting tag to 'yes' to comply with the legacy option found. + - Setting tag to '5' to comply with the legacy option found. test_case: - - - description: Replace an agent if force_insert enabled force options - input: - name: 'agent_1' - output: - status: 'success' - log: - - Duplicate name. Removing old agent 'agent_1' (id '001') \ No newline at end of file + - + description: Replace an agent if force_insert enabled force options + input: + name: agent_1 + output: + status: success + log: + - Duplicate name. 
Removing old agent 'agent_1' (id '001') diff --git a/tests/integration/test_authd/test_authd_use_password_invalid.py b/tests/integration/test_authd/test_authd_use_password_invalid.py new file mode 100644 index 0000000000..d9761ab1c2 --- /dev/null +++ b/tests/integration/test_authd/test_authd_use_password_invalid.py @@ -0,0 +1,145 @@ +''' +copyright: Copyright (C) 2015-2023, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: These tests will check invalid values in the authd.pass (for now just checks 'empty') + raises the expected error logs. + +components: + - authd + +suite: use_password + +targets: + - manager + +daemons: + - wazuh-authd + +os_platform: + - linux + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + +tags: + - enrollment + - authd +''' + +import pytest + +import os + +from wazuh_testing.modules.authd import event_monitor as evm +from wazuh_testing import DEFAULT_AUTHD_PASS_PATH +from wazuh_testing.tools.file import write_file, delete_file +from wazuh_testing.tools.configuration import get_test_cases_data, load_configuration_template +from wazuh_testing.tools.services import control_service + + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Marks +pytestmark = [pytest.mark.server, pytest.mark.tier(level=1)] + +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_authd_use_password_invalid.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'config_authd_use_password_invalid.yaml') + +# Test configurations +params, metadata, case_ids = get_test_cases_data(test_cases_path) +configuration = load_configuration_template(configurations_path, params, metadata) +local_internal_options = {'authd.debug': '2'} + + +# Fixture +@pytest.fixture() +def set_authd_pass(metadata: dict): + """Configure the file 'authd.pass' as needed for the test.""" + # Write the content in the authd.pass file. + write_file(DEFAULT_AUTHD_PASS_PATH, metadata.get('password')) + + yield + + # Delete the file as by default it doesn't exist. + delete_file(DEFAULT_AUTHD_PASS_PATH) + + +# Test +@pytest.mark.parametrize('metadata, configuration', zip(metadata, configuration), ids=case_ids) +def test_authd_use_password_invalid(metadata, configuration, truncate_monitored_files, + configure_local_internal_options_module, set_authd_pass, + set_wazuh_configuration, tear_down): + ''' + description: + Checks the correct errors are raised when an invalid password value + is configured in the authd.pass file. This test expects the error log + to come from the cases yaml, this is done this way to handle easily + the different error logs that could be raised from different inputs. + + wazuh_min_version: + 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Configuration loaded from `configuration_template`. + - metadata: + type: dict + brief: Test case metadata. + - set_wazuh_configuration: + type: fixture + brief: Set wazuh configuration. + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. 
+ - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. + - set_authd_pass: + type: fixture + brief: Configures the `authd.pass` file as needed. + - tear_down: + type: fixture + brief: Roll back the daemon and client.keys state after the test ends. + + assertions: + - Error log 'Empty password provided.' is raised in ossec.log. + - wazuh-manager.service must not be able to restart. + + input_description: + ./data/configuration_template/config_authd_use_password_invalid.yaml: Wazuh config needed for the tests. + ./data/test_cases/cases_authd_use_password_invalid.yaml: Values to be used and expected error. + + expected_output: + - .*Empty password provided. + - .*Invalid password provided. + ''' + if metadata.get('error') == 'Invalid password provided.': + pytest.xfail(reason="No password validation in authd.pass - Issue wazuh/wazuh#16282.") + + # Verify wazuh-manager fails at restart. + with pytest.raises(ValueError): + control_service('restart') + + # Verify the error log is raised. + evm.check_authd_event(callback=metadata.get('error')) diff --git a/tests/integration/test_fim/test_files/test_report_changes/common.py b/tests/integration/test_fim/common.py similarity index 90% rename from tests/integration/test_fim/test_files/test_report_changes/common.py rename to tests/integration/test_fim/common.py index 51e644fb4a..2231084c66 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/common.py +++ b/tests/integration/test_fim/common.py @@ -3,10 +3,11 @@ # This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 import os -import re import sys +import hashlib -from wazuh_testing.fim import WAZUH_PATH +from wazuh_testing.tools import PREFIX +from wazuh_testing import WAZUH_PATH def generate_string(stringLength=10, character='0'): @@ -167,13 +168,12 @@ def make_diff_file_path(folder='/testdir1', filename='regular_0'): diff_file_path : str Path to compressed file. """ - diff_file_path = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - if sys.platform == 'win32': - folder_components = re.match(r'^([a-zA-Z]):\\{1,2}(\w+)\\{0,2}$', folder) - diff_file_path = os.path.join(diff_file_path, folder_components.group(1).lower(), - folder_components.group(2).lower(), filename, 'last-entry.gz') - else: - diff_file_path = os.path.join(diff_file_path, folder.strip('/'), filename, 'last-entry.gz') + file_path = os.path.join(PREFIX, folder, filename) + sha_1 = hashlib.sha1() + sha_1.update(file_path.encode('utf-8')) + file_sha1 = sha_1.hexdigest() + + diff_file_path = os.path.join(WAZUH_PATH, 'queue', 'diff', 'file', file_sha1, 'last-entry.gz') return diff_file_path diff --git a/tests/integration/test_fim/conftest.py b/tests/integration/test_fim/conftest.py index 7c821808aa..16a8f4c36d 100644 --- a/tests/integration/test_fim/conftest.py +++ b/tests/integration/test_fim/conftest.py @@ -1,23 +1,30 @@ -# Copyright (C) 2015-2021, Wazuh Inc. +# Copyright (C) 2015-2022, Wazuh Inc. # Created by Wazuh, Inc. . 
# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - +import os +import re +import subprocess +import time import pytest -from wazuh_testing import global_parameters + +from distro import id +from wazuh_testing import (global_parameters, LOG_FILE_PATH, REGULAR, WAZUH_SERVICES_START, WAZUH_SERVICES_STOP, + WAZUH_LOG_MONITOR) from wazuh_testing.tools.services import control_service -from wazuh_testing.fim import (create_registry, registry_parser, KEY_WOW64_64KEY, delete_registry, - LOG_FILE_PATH, callback_detect_registry_integrity_clear_event) -from wazuh_testing.tools.file import truncate_file -from wazuh_testing.fim_module.fim_variables import WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, SYNC_INTERVAL_VALUE -from wazuh_testing.wazuh_variables import WAZUH_SERVICES_START, WAZUH_SERVICES_STOP, WAZUH_LOG_MONITOR from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.file import truncate_file, delete_path_recursively, create_file +from wazuh_testing.tools.local_actions import run_local_command_returning_output +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, SYNC_INTERVAL_VALUE, KEY_WOW64_64KEY, + MONITORED_DIR_1, registry_parser) +from wazuh_testing.modules.fim import event_monitor as evm +from wazuh_testing.modules.fim.utils import create_registry, delete_registry -@pytest.fixture(scope='function') +@pytest.fixture() def create_key(request): - """Fixture that create the test key And then delete the key and truncate the file. The aim of this - fixture is to avoid false positives if the manager still has the test key - in it's DB. + """ + Fixture that create the test key And then delete the key and truncate the file. The aim of this fixture is to avoid + false positives if the manager still has the test key in it's DB. """ control_service(WAZUH_SERVICES_STOP) create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, KEY_WOW64_64KEY) @@ -32,5 +39,132 @@ def create_key(request): # wait until the sync is done. file_monitor.start(timeout=SYNC_INTERVAL_VALUE + global_parameters.default_timeout, - callback=callback_detect_registry_integrity_clear_event, + callback=evm.callback_detect_registry_integrity_clear_event, error_message='Did not receive expected "integrity clear" event') + + +@pytest.fixture() +def create_files_in_folder(files_number): + """Create files in monitored folder and files""" + + for file in range(0, files_number): + create_file(REGULAR, MONITORED_DIR_1, f"test_file_{time.time()}_{file}") + + yield + + delete_path_recursively(MONITORED_DIR_1) + + +@pytest.fixture(scope='module') +def install_audit(get_configuration): + """Install auditd before test""" + + # Check distro + linux_distro = id() + + if re.match(linux_distro, "centos"): + package_management = "yum" + audit = "audit" + option = "--assumeyes" + elif re.match(linux_distro, "ubuntu") or re.match(linux_distro, "debian"): + package_management = "apt-get" + audit = "auditd" + option = "--yes" + else: + # Install audit and start the service + process = subprocess.run([package_management, "install", audit, option], check=True) + process = subprocess.run(["service", "auditd", "start"], check=True) + + +@pytest.fixture() +def wait_fim_start(configuration): + """ Wait for realtime start, whodata start or end of initial FIM scan. + + Args: + configuration (dict): Configuration template data to write in the ossec.conf. 
+ """ + file_monitor = FileMonitor(LOG_FILE_PATH) + mode_key = 'fim_mode' if 'fim_mode2' not in configuration else 'fim_mode2' + + try: + if configuration[mode_key] == 'realtime': + evm.detect_realtime_start(file_monitor) + elif configuration[mode_key] == 'whodata': + evm.detect_whodata_start(file_monitor) + else: # scheduled + evm.detect_initial_scan(file_monitor) + except KeyError: + evm.detect_initial_scan(file_monitor) + + +@pytest.fixture() +def wait_syscheck_start(metadata): + """ Wait for realtime start, whodata start or end of initial FIM scan. + Args: + metadata (dict): Test additional metadata + """ + file_monitor = FileMonitor(LOG_FILE_PATH) + mode_key = 'fim_mode' if 'fim_mode2' not in metadata else 'fim_mode2' + + try: + if metadata[mode_key] == 'realtime': + evm.detect_realtime_start(file_monitor) + elif metadata[mode_key] == 'whodata': + evm.detect_whodata_start(file_monitor) + else: # scheduled + evm.detect_initial_scan(file_monitor) + except KeyError: + evm.detect_initial_scan(file_monitor) + + +@pytest.fixture() +def restart_syscheck_function(): + """ + Restart syscheckd daemon. + """ + control_service("stop", daemon="wazuh-syscheckd") + truncate_file(LOG_FILE_PATH) + control_service("start", daemon="wazuh-syscheckd") + + +@pytest.fixture() +def create_monitored_folders(test_folders): + """ + Create the folders that will be monitored and delete them at the end. + + Args: + test_folders(list): List of folders to create and delete + """ + for folder in test_folders: + os.mkdir(folder, mode=0o0777) + + yield + + for folder in test_folders: + delete_path_recursively(folder) + + +@pytest.fixture(scope='module') +def create_monitored_folders_module(test_folders): + """ + Create the folders that will be monitored and delete them at the end. + + Args: + test_folders(list): List of folders to create and delete + """ + for folder in test_folders: + os.mkdir(folder, mode=0o0777) + + yield + + for folder in test_folders: + delete_path_recursively(folder) + + +@pytest.fixture() +def restore_win_whodata_policies(policies_file): + + yield + + command = f"auditpol /restore /file:{policies_file}" + run_local_command_returning_output(command) diff --git a/tests/integration/test_fim/test_files/conftest.py b/tests/integration/test_fim/test_files/conftest.py index 2dec425614..c6ff8eaefd 100644 --- a/tests/integration/test_fim/test_files/conftest.py +++ b/tests/integration/test_fim/test_files/conftest.py @@ -4,11 +4,12 @@ import pytest -from wazuh_testing.fim import (LOG_FILE_PATH, detect_initial_scan, detect_realtime_start, detect_whodata_start, - detect_initial_scan_start) +from wazuh_testing import LOG_FILE_PATH from wazuh_testing.tools.file import truncate_file from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.services import control_service +from wazuh_testing.modules.fim.event_monitor import (detect_initial_scan, detect_realtime_start, detect_whodata_start, + detect_initial_scan_start) @pytest.fixture(scope="module") @@ -23,7 +24,7 @@ def restart_syscheckd(get_configuration, request): control_service("start", daemon="wazuh-syscheckd") -@pytest.fixture(scope="function") +@pytest.fixture() def restart_syscheckd_function(get_configuration, request): """ Restart syscheckd daemon. 
@@ -38,20 +39,20 @@ def restart_syscheckd_function(get_configuration, request): @pytest.fixture(scope="module") def wait_for_fim_start(get_configuration, request): """ - Wait for fim to start + Wait for fim to start """ wait_for_fim_active(get_configuration, request) -@pytest.fixture(scope="function") +@pytest.fixture() def wait_for_fim_start_function(get_configuration, request): """ Wait for fim to start """ - wait_for_fim_start(get_configuration, request) + wait_for_fim_active(get_configuration, request) -@pytest.fixture(scope="function") +@pytest.fixture() def wait_for_scan_start(get_configuration, request): """ Wait for start of initial FIM scan. diff --git a/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict.yaml b/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict.yaml index 2de1a7afee..1f503cdc38 100644 --- a/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict.yaml +++ b/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict.yaml @@ -1,54 +1,53 @@ ---- # Conf 1 - tags: - - valid_no_regex + - valid_no_regex apply_to_modules: - - test_ignore_works_over_restrict + - test_ignore_works_over_restrict sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - frequency: - value: 2 - - directories: - value: "/testdir1" - attributes: - - check_all: 'yes' - - restrict: "testfile$" - - FIM_MODE - - directories: - value: "/testdir2" - attributes: - - check_all: 'yes' - - FIM_MODE - - ignore: - value: "/testdir1/testfile" + - section: syscheck + elements: + - disabled: + value: "no" + - frequency: + value: 1 + - directories: + value: /testdir1 + attributes: + - check_all: "yes" + - restrict: testfile$ + - FIM_MODE + - directories: + value: /testdir2 + attributes: + - check_all: "yes" + - FIM_MODE + - ignore: + value: /testdir1/testfile # Conf 2 - tags: - - valid_regex + - valid_regex apply_to_modules: - - test_ignore_works_over_restrict + - test_ignore_works_over_restrict sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - frequency: - value: 2 - - directories: - value: "/testdir1" - attributes: - - check_all: 'yes' - - restrict: "testfile2$" - - FIM_MODE - - directories: - value: "/testdir2" - attributes: - - check_all: 'yes' - - restrict: "not_ignored_sregex$" - - FIM_MODE - - ignore: - value: "testfile2$" - attributes: - - type: sregex + - section: syscheck + elements: + - disabled: + value: "no" + - frequency: + value: 1 + - directories: + value: /testdir1 + attributes: + - check_all: "yes" + - restrict: testfile2$ + - FIM_MODE + - directories: + value: /testdir2 + attributes: + - check_all: "yes" + - restrict: not_ignored_sregex$ + - FIM_MODE + - ignore: + value: testfile2$ + attributes: + - type: sregex diff --git a/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict_win32.yaml b/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict_win32.yaml index 91edff46fd..1bbf7316a6 100644 --- a/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict_win32.yaml +++ b/tests/integration/test_fim/test_files/test_ambiguous_confs/data/wazuh_conf_ignore_restrict_win32.yaml @@ -1,83 +1,81 @@ ---- # Conf 1 - tags: - - valid_no_regex + - valid_no_regex apply_to_modules: - - test_ignore_works_over_restrict + - test_ignore_works_over_restrict sections: - - section: syscheck - elements: - - disabled: - value: 
'no' - - frequency: - value: 2 - - directories: - value: "c:\\testdir1" - attributes: - - check_all: 'yes' - - restrict: "testfile$" - - FIM_MODE - - directories: - value: "c:\\testdir2" - attributes: - - check_all: 'yes' - - FIM_MODE - - ignore: - value: "c:\\testdir1\\testfile" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: "no" + - frequency: + value: 1 + - directories: + value: c:\\testdir1 + attributes: + - check_all: "yes" + - restrict: testfile$ + - FIM_MODE + - directories: + value: c:\\testdir2 + attributes: + - check_all: "yes" + - FIM_MODE + - ignore: + value: c:\\testdir1\\testfile + - section: sca + elements: + - enabled: + value: "no" + - section: rootcheck + elements: + - disabled: + value: "yes" + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: "yes" # Conf 2 - tags: - - valid_regex + - valid_regex apply_to_modules: - - test_ignore_works_over_restrict + - test_ignore_works_over_restrict sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - frequency: - value: 2 - - directories: - value: "c:\\testdir1" - attributes: - - check_all: 'yes' - - restrict: "testfile2$" - - FIM_MODE - - directories: - value: "c:\\testdir2" - attributes: - - check_all: 'yes' - - restrict: "not_ignored_sregex$" - - FIM_MODE - - ignore: - value: "testfile2$" - attributes: - - type: sregex - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' - + - section: syscheck + elements: + - disabled: + value: "no" + - frequency: + value: 1 + - directories: + value: c:\\testdir1 + attributes: + - check_all: "yes" + - restrict: testfile2$ + - FIM_MODE + - directories: + value: c:\\testdir2 + attributes: + - check_all: "yes" + - restrict: not_ignored_sregex$ + - FIM_MODE + - ignore: + value: testfile2$ + attributes: + - type: sregex + - section: sca + elements: + - enabled: + value: "no" + - section: rootcheck + elements: + - disabled: + value: "yes" + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: "yes" diff --git a/tests/integration/test_fim/test_files/test_ambiguous_confs/test_ignore_works_over_restrict.py b/tests/integration/test_fim/test_files/test_ambiguous_confs/test_ignore_works_over_restrict.py index d06f316598..841ccf842c 100644 --- a/tests/integration/test_fim/test_files/test_ambiguous_confs/test_ignore_works_over_restrict.py +++ b/tests/integration/test_fim/test_files/test_ambiguous_confs/test_ignore_works_over_restrict.py @@ -64,13 +64,12 @@ import sys import pytest -from wazuh_testing import global_parameters from wazuh_testing import logger -from wazuh_testing.fim import LOG_FILE_PATH, callback_ignore, callback_detect_event, create_file, REGULAR, \ - generate_params, check_time_travel +from wazuh_testing.fim import LOG_FILE_PATH, callback_detect_event, create_file, REGULAR, generate_params +from wazuh_testing.modules.fim.event_monitor import CB_IGNORING_DUE_TO_SREGEX, CB_IGNORING_DUE_TO_PATTERN from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor +from 
wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback # Marks @@ -79,9 +78,9 @@ # Variables test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') -configurations_path = os.path.join( - test_data_path, 'wazuh_conf_ignore_restrict_win32.yaml' - if sys.platform == 'win32' else 'wazuh_conf_ignore_restrict.yaml') +conf_path = os.path.join(test_data_path, + 'wazuh_conf_ignore_restrict_win32.yaml'if sys.platform == 'win32' + else 'wazuh_conf_ignore_restrict.yaml') test_directories = [os.path.join(PREFIX, 'testdir1'), os.path.join(PREFIX, 'testdir2')] testdir1, testdir2 = test_directories @@ -89,20 +88,20 @@ # Configurations -conf_params, conf_metadata = generate_params() -configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) - +params, metadata = generate_params() +configurations = load_wazuh_configurations(conf_path, __name__, params=params, metadata=metadata) # Fixtures + @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" return request.param - # Tests + @pytest.mark.parametrize('folder, filename, triggers_event, tags_to_apply', [ (testdir1, 'testfile', False, {'valid_no_regex'}), (testdir2, 'not_ignored_string', True, {'valid_no_regex'}), @@ -160,24 +159,24 @@ def test_ignore_works_over_restrict(folder, filename, triggers_event, tags_to_ap expected_output: - r'.*Sending FIM event: (.+)$' (When the FIM event should be generated) - - r".*Ignoring '.*?' '(.*?)' due to( sregex)? '.*?'" (When the FIM event should be ignored) + - r".*Ignoring '.*?' '(.*?)' due to (sregex|pattern)? '.*?'" (When the FIM event should be ignored) tags: - scheduled - - time_travel ''' logger.info('Applying the test configuration') check_apply_test(tags_to_apply, get_configuration['tags']) - scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' # Create file that must be ignored logger.info(f'Adding file {os.path.join(testdir1, filename)}, content: ""') create_file(REGULAR, folder, filename, content='') + # Waiting time for the new scan to be generated. + timeout = 5 # seconds + logger.info(f'Waiting up to {timeout} seconds for the new scan to be detected.') if triggers_event: - logger.info('Checking the event...') - event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + event = wazuh_log_monitor.start(timeout=timeout, callback=callback_detect_event, error_message=f'Did not receive expected "Sending FIM event" ' f'event for file {os.path.join(testdir1, filename)}').result() @@ -185,11 +184,12 @@ def test_ignore_works_over_restrict(folder, filename, triggers_event, tags_to_ap assert event['data']['type'] == 'added', 'Event type not equal' assert event['data']['path'] == os.path.join(folder, filename), 'Event path not equal' else: - while True: - ignored_file = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_ignore, - error_message=f'Did not receive expected ' - f'"Ignoring ... due to ..." event for file ' - f'{os.path.join(testdir1, filename)}').result() - - if ignored_file == os.path.join(folder, filename): - break + regex = CB_IGNORING_DUE_TO_PATTERN if 'valid_no_regex' in tags_to_apply else CB_IGNORING_DUE_TO_SREGEX + matching_log = wazuh_log_monitor.start(timeout=timeout, + accum_results=2, + callback=generate_monitoring_callback(regex), + error_message=f'Did not receive expected ' + f'"Ignoring ... due to ..." 
event for file ' + f'{os.path.join(testdir1, filename)}').result() + + assert os.path.join(folder, filename) in matching_log, "Ignored file log is not generated." diff --git a/tests/integration/test_fim/test_files/test_audit/data/configuration_template/configuration_remove_audit.yaml b/tests/integration/test_fim/test_files/test_audit/data/configuration_template/configuration_remove_audit.yaml new file mode 100644 index 0000000000..f97bab03de --- /dev/null +++ b/tests/integration/test_fim/test_files/test_audit/data/configuration_template/configuration_remove_audit.yaml @@ -0,0 +1,26 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: /testdir1 + attributes: + - whodata: WHODATA + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_audit/data/test_cases/cases_remove_audit.yaml b/tests/integration/test_fim/test_files/test_audit/data/test_cases/cases_remove_audit.yaml new file mode 100644 index 0000000000..0465303b2f --- /dev/null +++ b/tests/integration/test_fim/test_files/test_audit/data/test_cases/cases_remove_audit.yaml @@ -0,0 +1,6 @@ +- name: Remove auditd - Whodata cannot start + description: When Auditd is removed, whodata thread cannot start and monitoring is done in realtime + configuration_parameters: + WHODATA: 'yes' + metadata: + fim_mode: whodata diff --git a/tests/integration/test_fim/test_files/test_audit/data/wazuh_conf.yaml b/tests/integration/test_fim/test_files/test_audit/data/wazuh_conf.yaml deleted file mode 100644 index ccbe6bd4d2..0000000000 --- a/tests/integration/test_fim/test_files/test_audit/data/wazuh_conf.yaml +++ /dev/null @@ -1,15 +0,0 @@ ---- -# conf 1 -- tags: - - config1 - apply_to_modules: - - test_remove_audit - sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: "/testdir1,/testdir2,/testdir3" - attributes: - - whodata: 'yes' diff --git a/tests/integration/test_fim/test_files/test_audit/test_remove_audit.py b/tests/integration/test_fim/test_files/test_audit/test_remove_audit.py index 98e869c9bb..05f622f775 100644 --- a/tests/integration/test_fim/test_files/test_audit/test_remove_audit.py +++ b/tests/integration/test_fim/test_files/test_audit/test_remove_audit.py @@ -1,5 +1,5 @@ ''' -copyright: Copyright (C) 2015-2022, Wazuh Inc. +copyright: Copyright (C) 2015-2023, Wazuh Inc. Created by Wazuh, Inc. . 
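For context on the rewritten test that follows: the new configuration_remove_audit.yaml template keeps upper-case placeholders (for example WHODATA), while cases_remove_audit.yaml supplies the values that replace them plus per-case metadata (here fim_mode: whodata). A minimal sketch of how the two files are combined is shown here; it only reuses helpers that already appear in this diff (get_test_cases_data and load_configuration_template) and is illustrative rather than part of the change itself.

    import os

    from wazuh_testing.tools.configuration import get_test_cases_data, load_configuration_template

    data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data')
    cases_file = os.path.join(data_path, 'test_cases', 'cases_remove_audit.yaml')
    template_file = os.path.join(data_path, 'configuration_template', 'configuration_remove_audit.yaml')

    # Each case contributes configuration_parameters (e.g. WHODATA: 'yes') that fill the
    # matching placeholders in the template, plus metadata consumed by the test body.
    parameters, metadata, case_ids = get_test_cases_data(cases_file)
    configurations = load_configuration_template(template_file, parameters, metadata)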
@@ -65,32 +65,38 @@ import subprocess import pytest -import wazuh_testing.fim as fim from distro import id -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test + +from wazuh_testing.tools import PREFIX, LOG_FILE_PATH +from wazuh_testing.tools.configuration import load_configuration_template, get_test_cases_data from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.utils import retry +from wazuh_testing.modules.fim import TEST_DIR_1 +from wazuh_testing.modules.fim.event_monitor import callback_audit_cannot_start +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options -# Marks +# Marks pytestmark = [pytest.mark.linux, pytest.mark.tier(level=1)] -# Variables - -test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') -configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') -test_directories = [os.path.join('/', 'testdir1'), os.path.join('/', 'testdir2'), os.path.join('/', 'testdir3')] -testdir1, testdir2, testdir3 = test_directories +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') -wazuh_log_monitor = FileMonitor(fim.LOG_FILE_PATH) +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_remove_audit.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_remove_audit.yaml') -# Configurations +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = get_test_cases_data(test_cases_path) +configurations = load_configuration_template(configurations_path, configuration_parameters, configuration_metadata) -configurations = load_wazuh_configurations(configurations_path, __name__) +# Variables +test_directories = [os.path.join(PREFIX, TEST_DIR_1)] # Function - @retry(subprocess.CalledProcessError, attempts=5, delay=10) def run_process(command_list): """Execute the command_list command @@ -105,13 +111,6 @@ def run_process(command_list): # Fixtures - -@pytest.fixture(scope='module', params=configurations) -def get_configuration(request): - """Get configurations from the module.""" - return request.param - - @pytest.fixture(scope='module') def uninstall_install_audit(): """Uninstall auditd before test and install after test""" @@ -141,12 +140,11 @@ def uninstall_install_audit(): # Test - -@pytest.mark.parametrize('tags_to_apply', [ - {'config1'} -]) -def test_move_folders_to_realtime(tags_to_apply, get_configuration, uninstall_install_audit, - configure_environment, restart_syscheckd): +@pytest.mark.parametrize('test_folders', [test_directories], scope="module", ids='') +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +def test_move_folders_to_realtime(configuration, metadata, set_wazuh_configuration, create_monitored_folders, + configure_local_internal_options_function, uninstall_install_audit, + restart_syscheck_function): ''' description: Check if FIM switches the monitoring mode of the testing directories from 'who-data' to 'realtime' when the 'auditd' package is not installed. For this purpose, the test @@ -155,32 +153,52 @@ def test_move_folders_to_realtime(tags_to_apply, get_configuration, uninstall_in are monitored with 'realtime' verifying that the proper FIM events are generated. 
Finally, the test will install the 'auditd' package again. + + test_phases: + - setup: + - Apply ossec.conf configuration changes according to the configuration template and use case. + - Apply custom settings in local_internal_options.conf. + - Remove auditd + - Truncate wazuh logs. + - Restart wazuh-syscheck daemon to apply configuration changes. + - test: + - Check that whodata cannot start and monitoring of configured folder is changed to realtime mode. + - teardown: + - Install auditd + - Restore initial configuration, both ossec.conf and local_internal_options.conf. + wazuh_min_version: 4.2.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if match with a configuration identifier, skip otherwise. - - get_configuration: + - configuration: + type: dict + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - set_wazuh_configuration: type: fixture - brief: Get configurations from the module. + brief: Set ossec.conf configuration. + - create_monitored_folders_module + type: fixture + brief: Create folders to be monitored, delete after test. + - configure_local_internal_options_function: + type: fixture + brief: Set local_internal_options.conf file. - uninstall_install_audit: type: fixture brief: Uninstall 'auditd' before the test and install it again after the test run. - - configure_environment: + - restart_syscheck_function: type: fixture - brief: Configure a custom environment for testing. - - restart_syscheckd: - type: fixture - brief: Clear the 'ossec.log' file and start a new monitor. + brief: restart syscheckd daemon, and truncate the ossec.log. assertions: - Verify that FIM switches the monitoring mode of the testing directories from 'whodata' to 'realtime' if the 'authd' package is not installed. - input_description: A test case (config1) is contained in external YAML file (wazuh_conf.yaml) + input_description: A test case is contained in external YAML file (configuration_remove_audit.yaml) which includes configuration settings for the 'wazuh-syscheckd' daemon and, it is combined with the testing directories to be monitored defined in this module. @@ -192,8 +210,7 @@ def test_move_folders_to_realtime(tags_to_apply, get_configuration, uninstall_in - realtime - who_data ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - - wazuh_log_monitor.start(timeout=20, callback=fim.callback_audit_cannot_start, + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + wazuh_log_monitor.start(timeout=20, callback=callback_audit_cannot_start, error_message='Did not receive expected "Who-data engine could not start. 
' 'Switching who-data to real-time" event') diff --git a/tests/integration/test_fim/test_files/test_basic_usage/test_basic_usage_entries_match_path_count.py b/tests/integration/test_fim/test_files/test_basic_usage/test_basic_usage_entries_match_path_count.py index 5fa47ba094..62412bfc69 100644 --- a/tests/integration/test_fim/test_files/test_basic_usage/test_basic_usage_entries_match_path_count.py +++ b/tests/integration/test_fim/test_files/test_basic_usage/test_basic_usage_entries_match_path_count.py @@ -75,7 +75,7 @@ from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test from wazuh_testing.tools.monitoring import FileMonitor -from wazuh_testing.fim_module.event_monitor import callback_entries_path_count +from wazuh_testing.modules.fim.event_monitor import callback_entries_path_count # Marks diff --git a/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_dir.yaml b/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_dir.yaml index 381f387984..2049d01f86 100644 --- a/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_dir.yaml +++ b/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_dir.yaml @@ -1,28 +1,29 @@ # conf 1 - tags: - - ossec_conf + - ossec_conf apply_to_modules: - - MODULE_NAME + - test_dir + - test_dir_win32 sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_ENV_VARIABLES - attributes: - - FIM_MODE - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_ENV_VARIABLES + attributes: + - FIM_MODE + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_nodiff.yaml b/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_nodiff.yaml index 3aa952bd66..8055dec9f7 100644 --- a/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_nodiff.yaml +++ b/tests/integration/test_fim/test_files/test_env_variables/data/wazuh_conf_nodiff.yaml @@ -1,31 +1,33 @@ - tags: - - ossec_conf + - ossec_conf apply_to_modules: - - MODULE_NAME + - test_nodiff sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - nodiff: - value: TEST_ENV_VARIABLES - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - report_changes: 'yes' - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 3 + - nodiff: + value: TEST_ENV_VARIABLES + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - report_changes: 'yes' + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git 
a/tests/integration/test_fim/test_files/test_env_variables/test_nodiff.py b/tests/integration/test_fim/test_files/test_env_variables/test_nodiff.py index f58dd223b0..3f84e3226d 100644 --- a/tests/integration/test_fim/test_files/test_env_variables/test_nodiff.py +++ b/tests/integration/test_fim/test_files/test_env_variables/test_nodiff.py @@ -66,8 +66,9 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, regular_file_cud, WAZUH_PATH, generate_params +from test_fim.common import make_diff_file_path +from wazuh_testing import global_parameters, LOG_FILE_PATH +from wazuh_testing.modules.fim.utils import regular_file_cud, generate_params from wazuh_testing.tools.configuration import load_wazuh_configurations, PREFIX from wazuh_testing.tools.monitoring import FileMonitor @@ -187,13 +188,7 @@ def test_tag_nodiff(directory, filename, hidden_content, get_configuration, put_ def report_changes_validator(event): """Validate content_changes attribute exists in the event""" for file in files: - diff_file = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - - if sys.platform == 'win32': - diff_file = os.path.join(diff_file, 'c') - - striped = directory.strip(os.sep) if sys.platform == 'darwin' else directory.strip(PREFIX) - diff_file = os.path.join(diff_file, striped, file) + diff_file = make_diff_file_path(directory, file) assert os.path.exists(diff_file), f'{diff_file} does not exist' assert event['data'].get('content_changes') is not None, 'content_changes is empty' @@ -208,6 +203,5 @@ def no_diff_validator(event): 'content_changes is truncated' regular_file_cud(directory, wazuh_log_monitor, file_list=files, - time_travel=get_configuration['metadata']['fim_mode'] == 'scheduled', - min_timeout=global_parameters.default_timeout, triggers_event=True, + min_timeout=global_parameters.default_timeout*2, triggers_event=True, validators_after_update=[report_changes_validator, no_diff_validator]) diff --git a/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf.yaml b/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf.yaml index ccdf0e0a48..e79be5bb4f 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf.yaml +++ b/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf.yaml @@ -1,98 +1,96 @@ ---- # conf 1 - tags: - - no_file_limit + - no_file_limit apply_to_modules: - - test_file_limit_no_limit + - test_file_limit_no_limit sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - file_limit: - elements: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - file_limit: + elements: + - enabled: + value: 'no' + - entries: + value: 10 + - section: sca + elements: - enabled: value: 'no' - - entries: - value: '10' - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' # conf 2 - tags: - - file_limit_default + - file_limit_default apply_to_modules: - - test_file_limit_default + - test_file_limit_default sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - 
FIM_MODE - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' # conf 3 - tags: - - file_limit_conf + - file_limit_conf apply_to_modules: - - test_file_limit_full - - test_file_limit_values - - test_file_limit_capacity_alerts + - test_file_limit_full + - test_file_limit_values + - test_file_limit_capacity_alerts sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - frequency: - value: 10 - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - file_limit: - elements: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 10 + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - file_limit: + elements: + - enabled: + value: 'yes' + - entries: + value: FILE_LIMIT + - section: sca + elements: - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: value: 'yes' - - entries: - value: FILE_LIMIT - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' - diff --git a/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf_delete_full.yaml b/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf_delete_full.yaml index 2d617d5803..3b4b0346b0 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf_delete_full.yaml +++ b/tests/integration/test_fim/test_files/test_file_limit/data/wazuh_conf_delete_full.yaml @@ -1,34 +1,31 @@ ---- -# conf 1 - tags: - - tags_delete_full + - tags_delete_full apply_to_modules: - - test_file_limit_delete_full + - test_file_limit_delete_full sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - file_limit: - elements: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - file_limit: + elements: + - enabled: + value: 'yes' + - entries: + value: LIMIT + - section: sca + elements: - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: value: 'yes' - - entries: - value: LIMIT - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' - diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_capacity_alerts.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_capacity_alerts.py index 24cc918594..673d82460f 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_capacity_alerts.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_capacity_alerts.py @@ -51,6 +51,7 @@ pytest_args: - fim_mode: + scheduled: Monitoring is done after a configured interval realtime: Enable real-time 
monitoring on Linux (using the 'inotify' system calls) and Windows systems. whodata: Implies real-time monitoring but adding the 'who-data' information. - tier: @@ -65,23 +66,26 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, generate_params, create_file, REGULAR, delete_file, wait_for_scheduled_scan) + +from wazuh_testing import LOG_FILE_PATH, REGULAR, global_parameters from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file, delete_file from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import (CB_FILE_LIMIT_CAPACITY, ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT, - ERR_MSG_WRONG_CAPACITY_LOG_DB_LIMIT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, ERR_MSG_WRONG_INODE_PATH_COUNT, - CB_FILE_LIMIT_BACK_TO_NORMAL, ERR_MSG_DB_BACK_TO_NORMAL, ERR_MSG_FIM_INODE_ENTRIES) -from wazuh_testing.fim_module.event_monitor import callback_entries_path_count +from wazuh_testing.modules import TIER1 +from wazuh_testing.modules.fim import SCHEDULED_MODE, FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS +from wazuh_testing.modules.fim.event_monitor import (callback_entries_path_count, CB_FILE_LIMIT_CAPACITY, + ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT, ERR_MSG_FIM_INODE_ENTRIES, + ERR_MSG_WRONG_CAPACITY_LOG_DB_LIMIT, + ERR_MSG_WRONG_NUMBER_OF_ENTRIES, ERR_MSG_WRONG_INODE_PATH_COUNT) +from wazuh_testing.modules.fim.utils import generate_params, wait_for_scheduled_scan # Marks - -pytestmark = [pytest.mark.tier(level=1)] +pytestmark = [TIER1] # Variables +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS test_directories = [os.path.join(PREFIX, 'testdir1')] - directory_str = ','.join(test_directories) wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') @@ -95,8 +99,9 @@ file_limit_list = ['100'] conf_params = {'TEST_DIRECTORIES': testdir1} -params, metadata = generate_params(extra_params=conf_params, modes=['scheduled'], - apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list)) +params, metadata = generate_params(extra_params=conf_params, modes=[SCHEDULED_MODE], + apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem + in file_limit_list)) configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) @@ -113,9 +118,9 @@ def get_configuration(request): # Tests -@pytest.mark.parametrize('percentage', [(80), (90), (0)]) -def test_file_limit_capacity_alert(percentage, get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): +@pytest.mark.parametrize('percentage', [(0), (80), (90)]) +def test_file_limit_capacity_alert(percentage, configure_local_internal_options_module, get_configuration, + configure_environment, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon generates events for different capacity thresholds limits when using the 'schedule' monitoring mode. For this purpose, the test will monitor a directory in which @@ -124,7 +129,7 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi the total and when the number is less than that percentage. Finally, the test will verify that on the FIM event, inodes and monitored files number match. 
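The capacity-alert flow described above boils down to a short pattern: push the FIM database to a known fill level, wait for the next scheduled scan, then read the reported percentage from ossec.log. A condensed sketch of that pattern, built only from helpers and constants this changeset already imports; the wrapper name fill_and_read_capacity is hypothetical and not part of the framework:

    from wazuh_testing import LOG_FILE_PATH, REGULAR, global_parameters
    from wazuh_testing.tools.file import create_file
    from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback
    from wazuh_testing.modules.fim.event_monitor import (CB_FILE_LIMIT_CAPACITY,
                                                         ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT)
    from wazuh_testing.modules.fim.utils import wait_for_scheduled_scan


    def fill_and_read_capacity(directory, num_files, scan_delay, log_monitor):
        """Create files, wait for the next scheduled scan and return the DB fill percentage logged by syscheckd."""
        for i in range(num_files):
            create_file(REGULAR, directory, f'test{i}')
        wait_for_scheduled_scan(True, interval=scan_delay, monitor=log_monitor)
        # The callback extracts the percentage from the 'File database is X% full' log line.
        return log_monitor.start(timeout=global_parameters.default_timeout,
                                 callback=generate_monitoring_callback(CB_FILE_LIMIT_CAPACITY),
                                 error_message=ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT).result()


    # Example: with entries set to 100, creating 80 files should make syscheckd report '80'.
    # assert fill_and_read_capacity(testdir1, 80, scan_delay, FileMonitor(LOG_FILE_PATH)) == '80'
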
- wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -132,6 +137,9 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi - percentage: type: int brief: Percentage of testing files to be created. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -156,7 +164,7 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi expected_output: - r'.*Sending FIM event: (.+)$' ('added' event if the testing directory is not ignored) - - r'.*Sending DB * full alert.' + - r'.*File database is * full.' - r'.*Sending DB back to normal alert.' - r'.*Fim inode entries*, path count' - r'.*Fim entries' (on Windows systems) @@ -169,28 +177,26 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi if percentage == 0: NUM_FILES = 0 - # Create files up to desired database percentage to generate alerts + + # Create files up to desired database percentage to generate alerts if percentage >= 80: # Percentages 80 and 90 for i in range(NUM_FILES): create_file(REGULAR, testdir1, f'test{i}') - #Delete files to empty DB and return it to normal levels + + # Delete files to empty DB and return it to normal levels else: # Database back to normal for i in range(91): - delete_file(testdir1, f'test{i}') + delete_file(os.path.join(testdir1, f'test{i}')) wait_for_scheduled_scan(True, interval=scan_delay, monitor=wazuh_log_monitor) - #Look for file_limit percentage alert configure value and check it matches with the expected percentage - if percentage >= 80: + + # Look for file_limit percentage alert configure value and check it matches with the expected percentage + if percentage >= 80: file_limit_capacity = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=generate_monitoring_callback(CB_FILE_LIMIT_CAPACITY), error_message=ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT).result() assert file_limit_capacity == str(percentage), ERR_MSG_WRONG_CAPACITY_LOG_DB_LIMIT - # Check the is back on normal levels - else: - event_found = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_FILE_LIMIT_BACK_TO_NORMAL), - error_message=ERR_MSG_DB_BACK_TO_NORMAL).result() # Get entries and path counts and check they match the expected values entries, path_count = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_default.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_default.py index 9df545a221..ecc5d41912 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_default.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_default.py @@ -64,12 +64,14 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params +from wazuh_testing import global_parameters, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import ERR_MSG_FILE_LIMIT_VALUES, CB_FILE_LIMIT_VALUE, ERR_MSG_WRONG_FILE_LIMIT_VALUE +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from 
wazuh_testing.modules.fim.event_monitor import (ERR_MSG_FILE_LIMIT_VALUES, CB_FILE_LIMIT_VALUE, + ERR_MSG_WRONG_FILE_LIMIT_VALUE) +from wazuh_testing.modules.fim.utils import generate_params # Marks @@ -77,7 +79,6 @@ # Variables test_directories = [os.path.join(PREFIX, 'testdir1')] - directory_str = ','.join(test_directories) wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') @@ -102,20 +103,22 @@ def get_configuration(request): # Tests - -@pytest.mark.skipif(sys.platform == 'win32', reason="Blocked by wazuh/wazuh#11819") -def test_file_limit_default(get_configuration, configure_environment, restart_syscheckd): +def test_file_limit_default(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the maximum number of files monitored by the 'wazuh-syscheckd' daemon is set to default when the 'file_limit' tag is missing in the configuration. For this purpose, the test will monitor a directory and wait for FIM to start and generate an event indicating the maximum number of files to monitor. Finally, the test will verify that this number matches the default value (100000). - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -135,14 +138,14 @@ def test_file_limit_default(get_configuration, configure_environment, restart_sy combined with the testing directory to be monitored defined in this module. expected_output: - - r'.*Maximum number of entries to be monitored' + - r'.*Maximum number of files to be monitored' tags: - scheduled - realtime - who_data ''' - #Check the file limit configured and that it matches expected value (100000) + # Check the file limit configured and that it matches expected value (100000) file_limit_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=generate_monitoring_callback(CB_FILE_LIMIT_VALUE), error_message=ERR_MSG_FILE_LIMIT_VALUES).result() diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_delete_full.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_delete_full.py index db7b851cb7..425901b694 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_delete_full.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_delete_full.py @@ -68,12 +68,15 @@ from wazuh_testing.fim import LOG_FILE_PATH, delete_file, generate_params, create_file, REGULAR from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file, delete_file from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import (ERR_MSG_DATABASE_FULL_ALERT_EVENT, CB_FILE_LIMIT_CAPACITY, - ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL, ERR_MSG_NO_EVENTS_EXPECTED, ERR_MSG_DELETED_EVENT_NOT_RECIEVED) -from wazuh_testing.fim_module.event_monitor import callback_detect_event -# Marks +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (callback_detect_event, ERR_MSG_DATABASE_FULL_ALERT_EVENT, + CB_FILE_LIMIT_CAPACITY, ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL, + 
ERR_MSG_NO_EVENTS_EXPECTED, ERR_MSG_DELETED_EVENT_NOT_RECIEVED) +from wazuh_testing.modules.fim.utils import generate_params +# Marks pytestmark = [pytest.mark.tier(level=1)] # Variables @@ -123,7 +126,8 @@ def extra_configuration_before_yield(): @pytest.mark.parametrize('folder, file_name', [(testdir1, f'{base_file_name}{1}')]) -def test_file_limit_delete_full(folder, file_name, get_configuration, configure_environment, restart_syscheckd): +def test_file_limit_delete_full(folder, file_name, configure_local_internal_options_module, get_configuration, + configure_environment, restart_syscheckd): ''' description: Check a specific case. If a testing file ('test_file1') is not inserted in the FIM database (because the maximum number of files to be monitored has already been reached), and another @@ -134,7 +138,7 @@ def test_file_limit_delete_full(folder, file_name, get_configuration, configure_ no FIM events to be generated (file limit reached). Finally, it will delete 'test_file10' and verify that the 'deleted' FIM event matches that file. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -145,6 +149,9 @@ def test_file_limit_delete_full(folder, file_name, get_configuration, configure_ - file_name: type: str brief: Name of the testing file to be created. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -168,7 +175,7 @@ def test_file_limit_delete_full(folder, file_name, get_configuration, configure_ the testing directory to be monitored defined in this module. expected_output: - - r'.*Sending DB * full alert.' + - r'.*File database is (\\d+)% full' - r'.*Sending FIM event: (.+)$' ('deleted' event) tags: @@ -186,7 +193,7 @@ def test_file_limit_delete_full(folder, file_name, get_configuration, configure_ create_file(REGULAR, testdir1, file_name) sleep(sleep_time) # Delete the file created - Should not generate events - delete_file(folder, file_name) + delete_file(os.path.join(folder, file_name)) # Check no Creation or Deleted event has been generated with pytest.raises(TimeoutError): @@ -195,7 +202,7 @@ def test_file_limit_delete_full(folder, file_name, get_configuration, configure_ assert event is None, ERR_MSG_NO_EVENTS_EXPECTED # Delete the first file that was created (It is included in DB) - delete_file(folder, f'{file_name}{0}') + delete_file(os.path.join(folder, f'{file_name}{0}')) #Get that the file deleted generetes an event and assert the event data path. 
event = wazuh_log_monitor.start(timeout=T_20, diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_full.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_full.py index 479d0f800b..a47ab6aa7a 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_full.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_full.py @@ -65,18 +65,20 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params, create_file, REGULAR +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import(CB_FILE_LIMIT_CAPACITY, ERR_MSG_DATABASE_FULL_ALERT_EVENT, - ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL, CB_DATABASE_FULL_COULD_NOT_INSERT, ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT, - ERR_MSG_FIM_INODE_ENTRIES, ERR_MSG_WRONG_INODE_PATH_COUNT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES) -from wazuh_testing.fim_module.event_monitor import callback_entries_path_count +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (callback_entries_path_count, CB_FILE_LIMIT_CAPACITY, + ERR_MSG_DATABASE_FULL_ALERT_EVENT, ERR_MSG_FIM_INODE_ENTRIES, + ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL, + ERR_MSG_WRONG_INODE_PATH_COUNT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES) +from wazuh_testing.modules.fim.utils import generate_params -# Marks +# Marks pytestmark = [pytest.mark.tier(level=1)] # Variables @@ -95,7 +97,8 @@ conf_params = {'TEST_DIRECTORIES': testdir1} params, metadata = generate_params(extra_params=conf_params, - apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list)) + apply_to_all=({'FILE_LIMIT': file_limit_elem} for + file_limit_elem in file_limit_list)) configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) @@ -119,8 +122,8 @@ def extra_configuration_before_yield(): # Tests - -def test_file_limit_full( get_configuration, configure_environment, restart_syscheckd): +def test_file_limit_full(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon generates proper events while the FIM database is in 'full database alert' mode for reaching the limit of files to monitor set in the 'file_limit' tag. @@ -129,14 +132,14 @@ def test_file_limit_full( get_configuration, configure_environment, restart_sysc when a new testing file is added to the monitored directory. Finally, the test will verify that on the FIM event, inodes and monitored files number match. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -157,7 +160,7 @@ def test_file_limit_full( get_configuration, configure_environment, restart_sysc combined with the testing directory to be monitored defined in this module. 
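The full-database and capacity tests both finish with the same cross-check: read the 'Fim inode entries / path count' log line and compare it with the number of files actually monitored. A minimal sketch of that check, assuming callback_entries_path_count yields the captured values as strings, as the assertions in these hunks suggest; assert_db_entry_count is a hypothetical wrapper name:

    import sys

    from wazuh_testing.modules.fim.event_monitor import (callback_entries_path_count,
                                                         ERR_MSG_FIM_INODE_ENTRIES)


    def assert_db_entry_count(log_monitor, expected, timeout=40):
        """Check the 'Fim inode entries / path count' log line against the expected number of files."""
        entries, path_count = log_monitor.start(timeout=timeout,
                                                callback=callback_entries_path_count,
                                                error_message=ERR_MSG_FIM_INODE_ENTRIES).result()
        if sys.platform != 'win32':
            # Linux and macOS log both the inode entries and the path count.
            assert entries == str(expected) and path_count == str(expected)
        else:
            # Windows only logs the number of entries.
            assert entries == str(expected)
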
expected_output: - - r'.*Sending DB * full alert.' + - r'.*File database is (\\d+)% full' - r'.*The DB is full.*' - r'.*Fim inode entries*, path count' - r'.*Fim entries' (on Windows systems) @@ -167,19 +170,15 @@ def test_file_limit_full( get_configuration, configure_environment, restart_sysc - who_data - realtime ''' - #Check that database is full and assert database usage percentage is 100% + # Check that database is full and assert database usage percentage is 100% database_state = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=generate_monitoring_callback(CB_FILE_LIMIT_CAPACITY), error_message=ERR_MSG_DATABASE_FULL_ALERT_EVENT).result() assert database_state == '100', ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL - + # Create a file with the database being full - Should not generate events create_file(REGULAR, testdir1, 'file_full', content='content') - # Check new file could not be added to DB - wazuh_log_monitor.start(timeout=monitor_timeout, callback=generate_monitoring_callback(CB_DATABASE_FULL_COULD_NOT_INSERT), - error_message=ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT) - # Check number of entries and paths in DB and assert the value matches the expected count entries, path_count = wazuh_log_monitor.start(timeout=monitor_timeout, callback=callback_entries_path_count, error_message=ERR_MSG_FIM_INODE_ENTRIES).result() diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_no_limit.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_no_limit.py index 09111633ed..267ebd3172 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_no_limit.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_no_limit.py @@ -76,16 +76,18 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params + +from wazuh_testing import global_parameters, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import (ERR_MSG_FILE_LIMIT_DISABLED, CB_FILE_LIMIT_DISABLED) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_FILE_LIMIT_DISABLED, CB_FILE_LIMIT_DISABLED +from wazuh_testing.modules import TIER1 +from wazuh_testing.modules.fim.utils import generate_params # Marks - -pytestmark = [pytest.mark.tier(level=1)] +pytestmark = [TIER1] # Variables test_directories = [os.path.join(PREFIX, 'testdir1')] @@ -114,19 +116,21 @@ def get_configuration(request): # Tests - -@pytest.mark.skipif(sys.platform == 'win32', reason="Blocked by wazuh/wazuh #11162") -def test_file_limit_no_limit(get_configuration, configure_environment, restart_syscheckd): +def test_file_limit_no_limit(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon detects that the 'file_limit' feature of FIM is disabled. For this purpose, the test will monitor a testing directory, and finally, it will verify that the FIM event 'no limit' is generated. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. 
- get_configuration: type: fixture brief: Get configurations from the module. diff --git a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_values.py b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_values.py index cd7ad659b7..833a15e1d3 100644 --- a/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_values.py +++ b/tests/integration/test_fim/test_files/test_file_limit/test_file_limit_values.py @@ -65,28 +65,31 @@ import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params, create_file, REGULAR + +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.fim_module import (ERR_MSG_FILE_LIMIT_VALUES, CB_FILE_LIMIT_VALUE, ERR_MSG_WRONG_FILE_LIMIT_VALUE, - ERR_MSG_FIM_INODE_ENTRIES, ERR_MSG_WRONG_INODE_PATH_COUNT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES) -from wazuh_testing.fim_module.event_monitor import callback_entries_path_count +from wazuh_testing.modules import TIER1 +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (ERR_MSG_FILE_LIMIT_VALUES, CB_FILE_LIMIT_VALUE, + ERR_MSG_WRONG_FILE_LIMIT_VALUE, ERR_MSG_FIM_INODE_ENTRIES, + ERR_MSG_WRONG_INODE_PATH_COUNT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, + callback_entries_path_count) +from wazuh_testing.modules.fim.utils import generate_params # Marks - -pytestmark = [pytest.mark.tier(level=1)] +pytestmark = [TIER1] # Variables test_directories = [os.path.join(PREFIX, 'testdir1')] - directory_str = ','.join(test_directories) wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') testdir1 = test_directories[0] -monitor_timeout= 40 +monitor_timeout = 40 # Configurations @@ -94,7 +97,8 @@ conf_params = {'TEST_DIRECTORIES': testdir1} params, metadata = generate_params(extra_params=conf_params, - apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list)) + apply_to_all=({'FILE_LIMIT': file_limit_elem} for + file_limit_elem in file_limit_list)) configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) @@ -118,9 +122,8 @@ def extra_configuration_before_yield(): # Tests - -@pytest.mark.skipif(sys.platform == 'win32', reason="Blocked by wazuh/wazuh #11819") -def test_file_limit_values(get_configuration, configure_environment, restart_syscheckd): +def test_file_limit_values(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon detects that the value of the 'entries' tag, which corresponds to the maximum number of files to monitor from the 'file_limit' feature of FIM. For this purpose, @@ -128,11 +131,14 @@ def test_file_limit_values(get_configuration, configure_environment, restart_sys is generated and has the correct value. Finally, the test will verify that on the FIM event, inodes and monitored files number match. 
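Every module in this family builds its configurations the same way: generate_params expands the template placeholders, and apply_to_all fans an extra placeholder (FILE_LIMIT here) across every generated combination before load_wazuh_configurations renders the YAML template. A compact restatement of that idiom, condensed from the hunks above, with the values shown there used only as an illustration:

    import os

    from wazuh_testing.tools import PREFIX
    from wazuh_testing.tools.configuration import load_wazuh_configurations
    from wazuh_testing.modules.fim.utils import generate_params

    test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data')
    configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml')

    conf_params = {'TEST_DIRECTORIES': os.path.join(PREFIX, 'testdir1')}
    file_limit_list = ['100']

    # Every FILE_LIMIT value is applied to each mode/parameter combination generate_params produces.
    params, metadata = generate_params(extra_params=conf_params,
                                       apply_to_all=({'FILE_LIMIT': value} for value in file_limit_list))

    # The placeholders (TEST_DIRECTORIES, FIM_MODE, FILE_LIMIT, ...) are then substituted into the YAML template.
    configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata)
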
- wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -164,13 +170,12 @@ def test_file_limit_values(get_configuration, configure_environment, restart_sys # assert it matches the expected value assert file_limit_value == get_configuration['metadata']['file_limit'], ERR_MSG_WRONG_FILE_LIMIT_VALUE - # Check number of entries and paths in DB and assert the value matches the expected count entries, path_count = wazuh_log_monitor.start(timeout=monitor_timeout, callback=callback_entries_path_count, error_message=ERR_MSG_FIM_INODE_ENTRIES).result() if sys.platform != 'win32': assert (entries == get_configuration['metadata']['file_limit'] and - path_count == get_configuration['metadata']['file_limit']), ERR_MSG_WRONG_INODE_PATH_COUNT + path_count == get_configuration['metadata']['file_limit']), ERR_MSG_WRONG_INODE_PATH_COUNT else: assert entries == str(get_configuration['metadata']['file_limit']), ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_conf.yaml b/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_conf.yaml index 105eb0e197..ae55625ae5 100644 --- a/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_conf.yaml +++ b/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_conf.yaml @@ -1,22 +1,33 @@ - ---- -# conf 1 - tags: - - max_eps + - apply_to_modules: - - MODULE_NAME + - test_max_eps sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - max_eps: - value: MAX_EPS - - synchronization: - elements: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - max_eps: + value: MAX_EPS + - synchronization: + elements: + - enabled: + value: 'no' + - section: sca + elements: - enabled: value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_sync_conf_max_eps.yaml b/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_sync_conf_max_eps.yaml index fd04113b64..4b42afc901 100644 --- a/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_sync_conf_max_eps.yaml +++ b/tests/integration/test_fim/test_files/test_max_eps/data/wazuh_sync_conf_max_eps.yaml @@ -1,32 +1,32 @@ -# Configuration for sync max eps enabled -- tags: - - max_eps_synchronization - apply_to_modules: - - test_max_eps_synchronization - sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - synchronization: - elements: - - max_eps: - value: MAX_EPS - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' +# Configuration for sync max eps enabled +- tags: + - + apply_to_modules: + - test_sync_max_eps_scheduled + sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - synchronization: + elements: + - max_eps: + value: MAX_EPS + - 
section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_max_eps/test_max_eps.py b/tests/integration/test_fim/test_files/test_max_eps/test_max_eps.py index 9adf4b256c..d473c6c71d 100644 --- a/tests/integration/test_fim/test_files/test_max_eps/test_max_eps.py +++ b/tests/integration/test_fim/test_files/test_max_eps/test_max_eps.py @@ -60,37 +60,41 @@ - fim_max_eps ''' import os -from collections import Counter - +import sys import pytest -from wazuh_testing.fim import LOG_FILE_PATH, REGULAR, create_file, generate_params, callback_event_message, \ - check_time_travel +import time + +from collections import Counter +from wazuh_testing import logger, LOG_FILE_PATH from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.tools.file import write_file +from wazuh_testing.modules.fim import TEST_DIR_1, REALTIME_MODE, WHODATA_MODE +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (ERR_MSG_MULTIPLE_FILES_CREATION, callback_integrity_message, + CB_PATH_MONITORED_REALTIME, ERR_MSG_MONITORING_PATH, + CB_PATH_MONITORED_WHODATA, CB_PATH_MONITORED_WHODATA_WINDOWS) +from wazuh_testing.modules.fim.utils import generate_params -# Marks +# Marks pytestmark = pytest.mark.tier(level=1) # Variables -test_directories = [os.path.join(PREFIX, 'testdir1')] - -directory_str = ','.join(test_directories) - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') -testdir1 = os.path.join(PREFIX, 'testdir1') +test_directories = [os.path.join(PREFIX, TEST_DIR_1)] +TIMEOUT = 180 # Configurations -conf_params = {'TEST_DIRECTORIES': directory_str, - 'MODULE_NAME': __name__} +conf_params = {'TEST_DIRECTORIES': test_directories[0]} eps_values = ['50', '10'] p, m = generate_params(extra_params=conf_params, apply_to_all=({'MAX_EPS': eps_value} for eps_value in eps_values), - modes=['scheduled']) + modes=[REALTIME_MODE, WHODATA_MODE]) configurations = load_wazuh_configurations(configurations_path, __name__, params=p, metadata=m) @@ -102,7 +106,21 @@ def get_configuration(request): return request.param -def test_max_eps(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start): +def create_multiple_files(get_configuration): + """Create multiple files of a specific type.""" + max_eps = get_configuration['metadata']['max_eps'] + mode = get_configuration['metadata']['fim_mode'] + try: + for i in range(int(max_eps) * 4): + file_name = f'file{i}_to_max_eps_{max_eps}_{mode}_mode{time.time()}' + path = os.path.join(test_directories[0], file_name) + write_file(path) + except OSError: + logger.info(ERR_MSG_MULTIPLE_FILES_CREATION) + + +@pytest.mark.skip("This test is affected by Issue #15844, when it is fixed it should be enabled again.") +def test_max_eps(configure_local_internal_options_module, get_configuration, configure_environment, restart_wazuh): ''' 
description: Check if the 'wazuh-syscheckd' daemon applies the limit set in the 'max_eps' tag when a lot of 'syscheck' events are generated. For this purpose, the test will monitor a folder, @@ -111,7 +129,7 @@ def test_max_eps(get_configuration, configure_environment, restart_syscheckd, wa the testing files created. Finally, it will verify the limit of events per second (eps) is not exceeded by checking the creation time of the testing files. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -141,29 +159,27 @@ def test_max_eps(get_configuration, configure_environment, restart_syscheckd, wa - r'.*Sending FIM event: (.+)$' ('added' events) tags: - - realtime - scheduled ''' - check_apply_test({'max_eps'}, get_configuration['tags']) - max_eps = int(get_configuration['metadata']['max_eps']) mode = get_configuration['metadata']['fim_mode'] - + if sys.platform == 'win32': + monitoring_regex = CB_PATH_MONITORED_REALTIME if mode == 'realtime' else CB_PATH_MONITORED_WHODATA_WINDOWS + else: + monitoring_regex = CB_PATH_MONITORED_REALTIME if mode == 'realtime' else CB_PATH_MONITORED_WHODATA + + result = wazuh_log_monitor.start(timeout=TIMEOUT, + callback=generate_monitoring_callback(monitoring_regex), + error_message=ERR_MSG_MONITORING_PATH).result() + create_multiple_files(get_configuration) # Create files to read max_eps files with added events - for i in range(int(max_eps) * 5): - create_file(REGULAR, testdir1, f'test{i}_{mode}_{max_eps}', content='') - - check_time_travel(mode == "scheduled") - n_results = max_eps * 5 - - result = wazuh_log_monitor.start(timeout=(n_results / max_eps) * 6, + n_results = max_eps * 3 + result = wazuh_log_monitor.start(timeout=TIMEOUT, accum_results=n_results, - callback=callback_event_message, + callback=callback_integrity_message, error_message=f'Received less results than expected ({n_results})').result() counter = Counter([date_time for date_time, _ in result]) - error_margin = (max_eps * 0.1) for _, n_occurrences in counter.items(): - assert n_occurrences <= round( - max_eps + error_margin), f'Sent {n_occurrences} but a maximum of {max_eps} was set' + assert n_occurrences <= max_eps, f'Sent {n_occurrences} but a maximum of {max_eps} was set' diff --git a/tests/integration/test_fim/test_files/test_max_eps/test_max_eps_synchronization.py b/tests/integration/test_fim/test_files/test_max_eps/test_sync_max_eps_scheduled.py similarity index 81% rename from tests/integration/test_fim/test_files/test_max_eps/test_max_eps_synchronization.py rename to tests/integration/test_fim/test_files/test_max_eps/test_sync_max_eps_scheduled.py index 68739fc4dc..c95a0ec595 100644 --- a/tests/integration/test_fim/test_files/test_max_eps/test_max_eps_synchronization.py +++ b/tests/integration/test_fim/test_files/test_max_eps/test_sync_max_eps_scheduled.py @@ -48,8 +48,6 @@ pytest_args: - fim_mode: - realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. - whodata: Implies real-time monitoring but adding the 'who-data' information. 
scheduled: Implies scheduled scan - tier: @@ -62,20 +60,21 @@ ''' import os import pytest +import time from collections import Counter from wazuh_testing import logger -from wazuh_testing.tools import PREFIX -from wazuh_testing.fim import LOG_FILE_PATH, generate_params +from wazuh_testing.tools import PREFIX, LOG_FILE_PATH from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.modules import DATA, TIER1, AGENT, WINDOWS, LINUX -from wazuh_testing.modules.fim import (TEST_DIR_1, TEST_DIRECTORIES, YAML_CONF_MAX_EPS_SYNC, - ERR_MSG_AGENT_DISCONNECT, ERR_MSG_INTEGRITY_CONTROL_MSG, - SCHEDULE_MODE, REALTIME_MODE, WHODATA_MODE) +from wazuh_testing.modules.fim import TEST_DIR_1, TEST_DIRECTORIES, YAML_CONF_MAX_EPS_SYNC, SCHEDULED_MODE from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options -from wazuh_testing.modules.fim.event_monitor import callback_integrity_message, callback_connection_message +from wazuh_testing.modules.fim.event_monitor import (callback_integrity_message, ERR_MSG_INTEGRITY_CONTROL_MSG, + ERR_MSG_MULTIPLE_FILES_CREATION) from wazuh_testing.tools.file import delete_path_recursively, write_file +from wazuh_testing.modules.fim.utils import generate_params + # Marks pytestmark = [TIER1, AGENT, WINDOWS, LINUX] @@ -89,9 +88,7 @@ test_directory = os.path.join(PREFIX, TEST_DIR_1) conf_params = {TEST_DIRECTORIES: test_directory} -ERR_MSG_MULTIPLE_FILES_CREATION = 'Multiple files could not be created.' -TIMEOUT_CHECK_AGENT_CONNECT = 10 TIMEOUT_CHECK_INTEGRATY_START = 30 TIMEOUT_CHECK_EACH_INTEGRITY_MSG = 90 @@ -101,13 +98,13 @@ eps_values = ['1', '100'] parameters, metadata = generate_params(extra_params=conf_params, - modes=[SCHEDULE_MODE, REALTIME_MODE, WHODATA_MODE], + modes=[SCHEDULED_MODE], apply_to_all=({'MAX_EPS': eps_value} for eps_value in eps_values)) configurations = load_wazuh_configurations(configurations_path, __name__, params=parameters, metadata=metadata) configuration_ids = [f"{x['fim_mode']}_mode_{x['max_eps']}_max_eps" for x in metadata] -# Fixtures +# Fixtures @pytest.fixture(scope='module', params=configurations, ids=configuration_ids) def get_configuration(request): """Get configurations from the module.""" @@ -122,12 +119,13 @@ def create_multiple_files(get_configuration): os.makedirs(test_directory, exist_ok=True, mode=0o777) try: for i in range(int(max_eps) + 5): - file_name = f'file{i}_to_max_eps_{max_eps}_{mode}_mode' + file_name = f'file{i}_to_max_eps_{max_eps}_{mode}_mode{time.time()}' path = os.path.join(test_directory, file_name) write_file(path) except OSError: logger.info(ERR_MSG_MULTIPLE_FILES_CREATION) + # Tests def test_max_eps_sync_valid_within_range(configure_local_internal_options_module, get_configuration, create_multiple_files, configure_environment, restart_wazuh): @@ -161,9 +159,6 @@ def test_max_eps_sync_valid_within_range(configure_local_internal_options_module - restart_wazuh: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. - - delete_files: - type: fixture - brief: Delete the testing files when the test ends. assertions: - Verify that FIM 'integrity' events are generated for each testing file created. @@ -174,7 +169,6 @@ def test_max_eps_sync_valid_within_range(configure_local_internal_options_module the 'wazuh-syscheckd' daemon and, these are combined with the testing directories to be monitored defined in the module. 
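The throttling check itself is the core of both max_eps tests: every captured integrity message carries a timestamp, so grouping the accumulated results with collections.Counter gives the per-second rate, which must never exceed max_eps. A reduced sketch of that check, assuming the callback yields (date_time, message) pairs as the list comprehensions in these hunks unpack them; assert_eps_not_exceeded is a hypothetical wrapper name:

    from collections import Counter

    from wazuh_testing.modules.fim.event_monitor import callback_integrity_message


    def assert_eps_not_exceeded(log_monitor, max_eps, expected_results, timeout=90):
        """Accumulate expected_results messages and verify no single second carries more than max_eps of them."""
        results = log_monitor.start(timeout=timeout, accum_results=expected_results,
                                    callback=callback_integrity_message,
                                    error_message=f'Received less results than expected ({expected_results})').result()
        # Each result is a (date_time, message) pair; grouping by timestamp gives the per-second rate.
        counter = Counter([date_time for date_time, _ in results])
        for _, n_occurrences in counter.items():
            assert n_occurrences <= max_eps, f'Sent {n_occurrences} but a maximum of {max_eps} was set'
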
expected_output: - - r'.* Connected to the server .*' - r'.*Sending integrity control message' tags: @@ -185,11 +179,6 @@ def test_max_eps_sync_valid_within_range(configure_local_internal_options_module try: max_eps = int(get_configuration['metadata']['max_eps']) - # Wait until the agent connects to the manager. - wazuh_log_monitor.start(timeout=TIMEOUT_CHECK_AGENT_CONNECT, - callback=callback_connection_message, - error_message=ERR_MSG_AGENT_DISCONNECT).result() - # Find integrity start before attempting to read max_eps. wazuh_log_monitor.start(timeout=TIMEOUT_CHECK_INTEGRATY_START, callback=callback_integrity_message, @@ -197,11 +186,10 @@ def test_max_eps_sync_valid_within_range(configure_local_internal_options_module # Find integrity message for each file created after read max_eps. total_file_created = max_eps + 5 - result = wazuh_log_monitor.start(timeout=TIMEOUT_CHECK_EACH_INTEGRITY_MSG, - accum_results=total_file_created, - callback=callback_integrity_message, - error_message=f'Received less results than expected ({total_file_created})').result() - + result = wazuh_log_monitor.start(timeout=TIMEOUT_CHECK_EACH_INTEGRITY_MSG, accum_results=total_file_created, + callback=callback_integrity_message, + error_message=f'Received less results than expected\ + ({total_file_created})').result() # Collect by time received the messages. counter = Counter([date_time for date_time, _ in result]) diff --git a/tests/integration/test_fim/test_files/test_report_changes/data/configuration_template/configuration_report_changes_and_diff.yaml b/tests/integration/test_fim/test_files/test_report_changes/data/configuration_template/configuration_report_changes_and_diff.yaml new file mode 100644 index 0000000000..82826902d0 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_report_changes/data/configuration_template/configuration_report_changes_and_diff.yaml @@ -0,0 +1,33 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: INTERVAL + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - realtime: REALTIME + - whodata: WHODATA + - report_changes: 'yes' + - nodiff: + value: NODIFF_FILE + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_report_changes/data/test_cases/cases_report_changes_and_diff.yaml b/tests/integration/test_fim/test_files/test_report_changes/data/test_cases/cases_report_changes_and_diff.yaml new file mode 100644 index 0000000000..38cc3b324c --- /dev/null +++ b/tests/integration/test_fim/test_files/test_report_changes/data/test_cases/cases_report_changes_and_diff.yaml @@ -0,0 +1,59 @@ +- name: report_changes_found_scheduled + description: When a file is monitored with report_changes, the diff file and changes are reported (Scheduled mode) + configuration_parameters: + INTERVAL: 5 + REALTIME: 'no' + WHODATA: 'no' + metadata: + folder: testdir_reports + fim_mode: scheduled + +- name: report_changes_truncated_scheduled + description: When a file is set to nodiff, report_changes information is truncated (Scheduled mode) + configuration_parameters: + INTERVAL: 5 + REALTIME: 'no' + WHODATA: 'no' + metadata: + folder: testdir_nodiff + fim_mode: scheduled + +- name: report_changes_found_realtime + description: When a file is monitored with report_changes, the diff file and changes are 
reported (Realtime mode) + configuration_parameters: + INTERVAL: 1000 + REALTIME: 'yes' + WHODATA: 'no' + metadata: + folder: testdir_reports + fim_mode: realtime + +- name: report_changes_truncated_realtime + description: When a file is set to nodiff, report_changes information is truncated (Realtime mode) + configuration_parameters: + INTERVAL: 1000 + REALTIME: 'yes' + WHODATA: 'no' + metadata: + folder: testdir_nodiff + fim_mode: realtime + +- name: report_changes_found_whodata + description: When a file is monitored with report_changes, the diff file and changes are reported (Whodata mode) + configuration_parameters: + INTERVAL: 1000 + REALTIME: 'no' + WHODATA: 'yes' + metadata: + folder: testdir_reports + fim_mode: whodata + +- name: report_changes_truncated_whodata + description: When a file is set to nodiff, report_changes information is truncated (Whodata mode) + configuration_parameters: + INTERVAL: 1000 + REALTIME: 'no' + WHODATA: 'yes' + metadata: + folder: testdir_nodiff + fim_mode: whodata diff --git a/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf.yaml b/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf.yaml index 6a98f8a0b1..c3cb36c188 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf.yaml +++ b/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf.yaml @@ -1,86 +1,130 @@ ---- # conf 1 - tags: - - ossec_conf_report + - ossec_conf_report apply_to_modules: - - MODULE_NAME + - test_large_changes + - test_report_deleted_diff sections: - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - REPORT_CHANGES - - nodiff: - value: NODIFF_FILE + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 3 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - REPORT_CHANGES + - nodiff: + value: NODIFF_FILE + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 2 - tags: - - ossec_conf_diff + - apply_to_modules: - - MODULE_NAME + - test_file_size_default + - test_disk_quota_default sections: - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - REPORT_CHANGES - - diff: - elements: - - file_size: + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' + - section: syscheck + elements: + - 
disabled: + value: 'no' + - frequency: + value: 3 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - REPORT_CHANGES + +# conf 2 +- tags: + - + apply_to_modules: + - test_file_size_disabled + - test_file_size_values + - test_disk_quota_disabled + sections: + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 3 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - REPORT_CHANGES + - diff: elements: - - enabled: - value: FILE_SIZE_ENABLED - - limit: - value: FILE_SIZE_LIMIT + - file_size: + elements: + - enabled: + value: FILE_SIZE_ENABLED + - limit: + value: FILE_SIZE_LIMIT - disk_quota: elements: - - enabled: - value: DISK_QUOTA_ENABLED - - limit: - value: DISK_QUOTA_LIMIT + - enabled: + value: DISK_QUOTA_ENABLED + - limit: + value: DISK_QUOTA_LIMIT diff --git a/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf_diff.yaml b/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf_diff.yaml index 81b825d002..7bea2e77a6 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf_diff.yaml +++ b/tests/integration/test_fim/test_files/test_report_changes/data/wazuh_conf_diff.yaml @@ -1,35 +1,37 @@ - tags: - - ossec_conf_diff_size_limit + - apply_to_modules: - - test_diff_size_limit_configured - - test_diff_size_limit_default + - test_diff_size_limit_configured + - test_diff_size_limit_default sections: - - section: sca - elements: - - enabled: - value: 'no' - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - REPORT_CHANGES - - DIFF_SIZE_LIMIT - - diff: - elements: - - file_size: + - section: sca + elements: + - enabled: + value: 'no' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 3 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - REPORT_CHANGES + - DIFF_SIZE_LIMIT + - diff: elements: - - enabled: - value: FILE_SIZE_ENABLED - - limit: - value: FILE_SIZE_LIMIT - - disk_quota: - elements: - - enabled: - value: DISK_QUOTA_ENABLED - - limit: - value: DISK_QUOTA_LIMIT + - file_size: + elements: + - enabled: + value: FILE_SIZE_ENABLED + - limit: + value: FILE_SIZE_LIMIT + - disk_quota: + elements: + - enabled: + value: DISK_QUOTA_ENABLED + - limit: + value: DISK_QUOTA_LIMIT diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_configured.py b/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_configured.py index 0c4dde18a6..5474e54fb8 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_configured.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_configured.py @@ -68,19 +68,18 @@ - fim_report_changes ''' import os - import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params + +from wazuh_testing import global_parameters, DATA, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations 
-from wazuh_testing.fim_module.fim_variables import (TEST_DIR_1, YAML_CONF_DIFF, DIFF_LIMIT_VALUE, - DIFF_SIZE_LIMIT, DISK_QUOTA_ENABLED, DISK_QUOTA_LIMIT, - FILE_SIZE_ENABLED, FILE_SIZE_LIMIT, CB_MAXIMUM_FILE_SIZE, - REPORT_CHANGES, TEST_DIRECTORIES, ERR_MSG_MAXIMUM_FILE_SIZE, - ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE) -from wazuh_testing.wazuh_variables import DATA, SYSCHECK_DEBUG, VERBOSE_DEBUG_OUTPUT from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim import (TEST_DIR_1, REPORT_CHANGES, TEST_DIRECTORIES, DIFF_LIMIT_VALUE, FILE_SIZE_LIMIT, + DIFF_SIZE_LIMIT, DISK_QUOTA_ENABLED, DISK_QUOTA_LIMIT, FILE_SIZE_ENABLED) +from wazuh_testing.modules.fim.event_monitor import (CB_MAXIMUM_FILE_SIZE, ERR_MSG_MAXIMUM_FILE_SIZE, + ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE) +from wazuh_testing.modules.fim.utils import generate_params # Marks @@ -92,7 +91,7 @@ wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directory = os.path.join(PREFIX, TEST_DIR_1) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), DATA) -configurations_path = os.path.join(test_data_path, YAML_CONF_DIFF) +configurations_path = os.path.join(test_data_path, 'wazuh_conf_diff.yaml') # Configurations @@ -106,11 +105,9 @@ DISK_QUOTA_LIMIT: '2KB'}) configurations = load_wazuh_configurations(configurations_path, __name__, params=parameters, metadata=metadata) -local_internal_options = {SYSCHECK_DEBUG: VERBOSE_DEBUG_OUTPUT} - -# Fixtures +# Fixtures @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" @@ -118,9 +115,8 @@ def get_configuration(request): # Tests - -def test_diff_size_limit_configured(configure_local_internal_options_module, get_configuration, - configure_environment, restart_syscheckd): +def test_diff_size_limit_configured(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of 'diff' information to generate from the value set in the 'diff_size_limit' attribute when the global 'file_size' tag is different. @@ -129,7 +125,7 @@ def test_diff_size_limit_configured(configure_local_internal_options_module, get the test will verify that the value gotten from that FIM event corresponds with the one set in the 'diff_size_limit'. 
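Both diff_size_limit tests reduce to reading one startup log line and comparing it against the expected limit; only the constant differs (DIFF_LIMIT_VALUE for the configured case, DIFF_DEFAULT_LIMIT_VALUE, 50MB, for the default case). A sketch using only names imported in these hunks; assert_diff_size_limit is a hypothetical wrapper name:

    from wazuh_testing import global_parameters
    from wazuh_testing.tools.monitoring import generate_monitoring_callback
    from wazuh_testing.modules.fim.event_monitor import (CB_MAXIMUM_FILE_SIZE, ERR_MSG_MAXIMUM_FILE_SIZE,
                                                         ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE)


    def assert_diff_size_limit(log_monitor, expected_value):
        """Read the maximum-file-size-to-diff value logged at startup and compare it with the expected limit."""
        diff_size_value = log_monitor.start(timeout=global_parameters.default_timeout,
                                            callback=generate_monitoring_callback(CB_MAXIMUM_FILE_SIZE),
                                            error_message=ERR_MSG_MAXIMUM_FILE_SIZE).result()
        assert diff_size_value == str(expected_value), ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE
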
- wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -166,9 +162,8 @@ def test_diff_size_limit_configured(configure_local_internal_options_module, get - who_data ''' - diff_size_value = wazuh_log_monitor.start( - timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_MAXIMUM_FILE_SIZE), - error_message=ERR_MSG_MAXIMUM_FILE_SIZE).result() + diff_size_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_MAXIMUM_FILE_SIZE), + error_message=ERR_MSG_MAXIMUM_FILE_SIZE).result() assert diff_size_value == str(DIFF_LIMIT_VALUE), ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_default.py b/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_default.py index 2bcbeedca4..8aea598431 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_default.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_diff_size_limit_default.py @@ -30,7 +30,7 @@ - linux - windows - macos - - solaris + - solaris os_version: - Arch Linux @@ -70,18 +70,15 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params +from wazuh_testing import global_parameters, DATA, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.wazuh_variables import DATA -from wazuh_testing.fim_module.fim_variables import (DIFF_DEFAULT_LIMIT_VALUE, CB_MAXIMUM_FILE_SIZE, - REPORT_CHANGES, TEST_DIR_1, TEST_DIRECTORIES, - YAML_CONF_DIFF, ERR_MSG_MAXIMUM_FILE_SIZE, - ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE) -from wazuh_testing.wazuh_variables import SYSCHECK_DEBUG, VERBOSE_DEBUG_OUTPUT - +from wazuh_testing.modules.fim import DIFF_DEFAULT_LIMIT_VALUE, REPORT_CHANGES, TEST_DIR_1, TEST_DIRECTORIES +from wazuh_testing.modules.fim.event_monitor import (CB_MAXIMUM_FILE_SIZE, ERR_MSG_MAXIMUM_FILE_SIZE, + ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE) +from wazuh_testing.modules.fim.utils import generate_params +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options # Marks pytestmark = [pytest.mark.tier(level=1)] @@ -91,7 +88,7 @@ wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directory = os.path.join(PREFIX, TEST_DIR_1) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), DATA) -configurations_path = os.path.join(test_data_path, YAML_CONF_DIFF) +configurations_path = os.path.join(test_data_path, 'wazuh_conf_diff.yaml') # Configurations @@ -100,7 +97,6 @@ TEST_DIRECTORIES: test_directory}) configurations = load_wazuh_configurations(configurations_path, __name__, params=parameters, metadata=metadata) -local_internal_options = {SYSCHECK_DEBUG: VERBOSE_DEBUG_OUTPUT} # Fixtures @@ -113,9 +109,8 @@ def get_configuration(request): # Tests - -def test_diff_size_limit_default(configure_local_internal_options_module, get_configuration, - configure_environment, restart_syscheckd): +def test_diff_size_limit_default(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of 'diff' information to generate from the default value of the 'diff_size_limit' attribute. 
For this purpose, the test will monitor @@ -123,7 +118,7 @@ def test_diff_size_limit_default(configure_local_internal_options_module, get_co file size to generate 'diff' information. Finally, the test will verify that the value gotten from that FIM event corresponds with the default value of the 'diff_size_limit' attribute (50MB). - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -160,10 +155,8 @@ def test_diff_size_limit_default(configure_local_internal_options_module, get_co - who_data ''' - diff_size_value = wazuh_log_monitor.start( - timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_MAXIMUM_FILE_SIZE), - error_message=ERR_MSG_MAXIMUM_FILE_SIZE - ).result() + diff_size_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_MAXIMUM_FILE_SIZE), + error_message=ERR_MSG_MAXIMUM_FILE_SIZE).result() assert diff_size_value == str(DIFF_DEFAULT_LIMIT_VALUE), ERR_MSG_WRONG_VALUE_MAXIMUM_FILE_SIZE diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_default.py b/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_default.py index f884a3b378..101bf75634 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_default.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_default.py @@ -63,18 +63,20 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, callback_disk_quota_default, generate_params +from wazuh_testing import global_parameters, LOG_FILE_PATH from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import CB_DISK_QUOTA_LIMIT_CONFIGURED_VALUE, ERR_MSG_DISK_QUOTA_LIMIT +from wazuh_testing.modules.fim.utils import generate_params + # Marks pytestmark = [pytest.mark.tier(level=1)] # Variables - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directories = [os.path.join(PREFIX, 'testdir1')] directory_str = ','.join(test_directories) @@ -86,8 +88,7 @@ # Configurations conf_params, conf_metadata = generate_params(extra_params={'REPORT_CHANGES': {'report_changes': 'yes'}, - 'TEST_DIRECTORIES': directory_str, - 'MODULE_NAME': __name__}) + 'TEST_DIRECTORIES': directory_str}) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) @@ -102,10 +103,8 @@ def get_configuration(request): # Tests -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_diff_default'} -]) -def test_disk_quota_default(tags_to_apply, get_configuration, configure_environment, restart_syscheckd): +def test_disk_quota_default(get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of the folder where the data used to perform the 'diff' operations is stored to the default value. 
For this purpose, the test will monitor @@ -113,20 +112,20 @@ def test_disk_quota_default(tags_to_apply, get_configuration, configure_environm disk quota to store 'diff' information. Finally, the test will verify that the value gotten from that FIM event corresponds with the default value of the 'disk_quota' tag (1GB). - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - get_configuration: type: fixture brief: Get configurations from the module. - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. @@ -147,13 +146,11 @@ def test_disk_quota_default(tags_to_apply, get_configuration, configure_environm - disk_quota - scheduled ''' - check_apply_test(tags_to_apply, get_configuration['tags']) disk_quota_value = wazuh_log_monitor.start( timeout=global_parameters.default_timeout, - callback=callback_disk_quota_default, - error_message='Did not receive expected "Maximum disk quota size limit configured to \'... KB\'." event' - ).result() + callback=generate_monitoring_callback(CB_DISK_QUOTA_LIMIT_CONFIGURED_VALUE), + error_message=ERR_MSG_DISK_QUOTA_LIMIT).result() if disk_quota_value: assert disk_quota_value == str(DEFAULT_SIZE), 'Wrong value for disk_quota' diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_disabled.py b/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_disabled.py index 807535d406..883964128b 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_disabled.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_disk_quota_disabled.py @@ -63,20 +63,20 @@ import os import pytest -from test_fim.test_files.test_report_changes.common import generate_string -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, REGULAR, callback_disk_quota_limit_reached, generate_params, create_file, \ - check_time_travel +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test +from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import callback_disk_quota_limit_reached +from wazuh_testing.modules.fim.utils import generate_params, create_file +from test_fim.common import generate_string # Marks pytestmark = [pytest.mark.tier(level=1)] # Variables - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directories = [os.path.join(PREFIX, 'testdir1')] directory_str = ','.join(test_directories) @@ -91,8 +91,7 @@ 'FILE_SIZE_ENABLED': 'no', 'FILE_SIZE_LIMIT': '1KB', 'DISK_QUOTA_ENABLED': 'no', - 'DISK_QUOTA_LIMIT': '2KB', - 'MODULE_NAME': __name__}) + 'DISK_QUOTA_LIMIT': '2KB'}) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) @@ -106,15 +105,9 @@ def get_configuration(request): # Tests - -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_diff'} -]) 
-@pytest.mark.parametrize('filename, folder, size', [ - ('regular_0', testdir1, 10000000), -]) -def test_disk_quota_disabled(tags_to_apply, filename, folder, size, get_configuration, configure_environment, - restart_syscheckd, wait_for_fim_start): +@pytest.mark.parametrize('filename, folder, size', [('regular_0', testdir1, 10000000)]) +def test_disk_quota_disabled(filename, folder, size, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of the folder where the data used to perform the 'diff' operations is stored when the 'disk_quota' option is disabled. @@ -123,14 +116,11 @@ def test_disk_quota_disabled(tags_to_apply, filename, folder, size, get_configur 'disk_quota' limit. Finally, the test will verify that the FIM event related to the reached disk quota has not been generated. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - filename: type: str brief: Name of the testing file to be created. @@ -146,6 +136,9 @@ def test_disk_quota_disabled(tags_to_apply, filename, folder, size, get_configur - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. @@ -168,13 +161,8 @@ def test_disk_quota_disabled(tags_to_apply, filename, folder, size, get_configur - disk_quota - scheduled ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' - to_write = generate_string(size, '0') create_file(REGULAR, folder, filename, content=to_write) - check_time_travel(scheduled) - with pytest.raises(TimeoutError): wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_disk_quota_limit_reached) diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_default.py b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_default.py index 7cc0260ae4..8739367819 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_default.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_default.py @@ -63,21 +63,23 @@ import os import pytest -from test_fim.test_files.test_report_changes.common import generate_string, translate_size, make_diff_file_path, \ - disable_rt_delay, restore_rt_delay -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, REGULAR, callback_file_size_limit_reached, generate_params, create_file, \ - check_time_travel, callback_detect_event, modify_file_content +from test_fim.common import generate_string, translate_size, make_diff_file_path +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.file import create_file, modify_file_content +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules.fim 
import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (callback_detect_event, CB_FILE_SIZE_LIMIT_REACHED, + ERR_MSG_FIM_EVENT_NOT_DETECTED, ERR_MSG_FILE_LIMIT_REACHED) +from wazuh_testing.modules.fim.utils import generate_params + # Marks pytestmark = [pytest.mark.tier(level=1)] # Variables - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directories = [os.path.join(PREFIX, 'testdir1')] directory_str = ','.join(test_directories) @@ -88,8 +90,7 @@ # Configurations conf_params, conf_metadata = generate_params(extra_params={'REPORT_CHANGES': {'report_changes': 'yes'}, - 'TEST_DIRECTORIES': directory_str, - 'MODULE_NAME': __name__}) + 'TEST_DIRECTORIES': directory_str}) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) @@ -102,32 +103,10 @@ def get_configuration(request): return request.param -# Functions - -def extra_configuration_before_yield(): - """ - Disable syscheck.rt_delay internal option - """ - disable_rt_delay() - - -def extra_configuration_after_yield(): - """ - Restore syscheck.rt_delay internal option - """ - restore_rt_delay() - - # Tests - -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_diff_default'} -]) -@pytest.mark.parametrize('filename, folder', [ - ('regular_0', testdir1), -]) -def test_file_size_default(tags_to_apply, filename, folder, get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): +@pytest.mark.parametrize('filename, folder', [('regular_0', testdir1)]) +def test_file_size_default(filename, folder, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of the monitored file to generate 'diff' information from the default value of the 'file_size' option. For this purpose, @@ -137,14 +116,11 @@ def test_file_size_default(tags_to_apply, filename, folder, get_configuration, c reached file size limit has been generated, and the compressed file in the 'queue/diff/local' directory does not exist. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - filename: type: str brief: Name of the testing file to be created. @@ -157,6 +133,9 @@ def test_file_size_default(tags_to_apply, filename, folder, get_configuration, c - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. 
@@ -179,10 +158,8 @@ def test_file_size_default(tags_to_apply, filename, folder, get_configuration, c tags: - diff - scheduled - - time_travel ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' + size_limit = translate_size('50MB') diff_file_path = make_diff_file_path(folder=folder, filename=filename) @@ -190,9 +167,8 @@ def test_file_size_default(tags_to_apply, filename, folder, get_configuration, c to_write = generate_string(int(size_limit / 10), '0') create_file(REGULAR, folder, filename, content=to_write) - check_time_travel(scheduled) wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event, - error_message='Did not receive expected "Sending FIM event: ..." event.') + error_message=ERR_MSG_FIM_EVENT_NOT_DETECTED) if not os.path.exists(diff_file_path): pytest.raises(FileNotFoundError(f"{diff_file_path} not found. It should exist before increasing the size.")) @@ -201,13 +177,9 @@ def test_file_size_default(tags_to_apply, filename, folder, get_configuration, c to_write = generate_string(size_limit, '0') modify_file_content(folder, filename, new_content=to_write * 3) - check_time_travel(scheduled) - - wazuh_log_monitor.start( - timeout=global_parameters.default_timeout, - callback=callback_file_size_limit_reached, - error_message='Did not receive expected ' - '"File ... is too big for configured maximum size to perform diff operation" event.') + wazuh_log_monitor.start(timeout=global_parameters.default_timeout*3, + callback=generate_monitoring_callback(CB_FILE_SIZE_LIMIT_REACHED), + error_message=ERR_MSG_FILE_LIMIT_REACHED) if os.path.exists(diff_file_path): pytest.raises(FileExistsError(f"{diff_file_path} found. It should not exist after increasing the size.")) diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_disabled.py b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_disabled.py index f4c3341320..14256b783b 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_disabled.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_disabled.py @@ -63,20 +63,21 @@ import os import pytest -from test_fim.test_files.test_report_changes.common import generate_string -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, REGULAR, callback_file_size_limit_reached, generate_params, create_file, \ - check_time_travel from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import CB_FILE_SIZE_LIMIT_REACHED +from wazuh_testing.modules.fim.utils import generate_params, create_file +from test_fim.common import generate_string -# Marks +# Marks pytestmark = [pytest.mark.tier(level=1)] -# Variables +# Variables wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_directories = [os.path.join(PREFIX, 'testdir1')] directory_str = ','.join(test_directories) @@ -91,8 +92,7 @@ 'FILE_SIZE_ENABLED': 'no', 'FILE_SIZE_LIMIT': '1KB',
'DISK_QUOTA_ENABLED': 'yes', - 'DISK_QUOTA_LIMIT': '2KB', - 'MODULE_NAME': __name__}) + 'DISK_QUOTA_LIMIT': '2KB'}) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) @@ -107,14 +107,9 @@ def get_configuration(request): # Tests -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_diff'} -]) -@pytest.mark.parametrize('filename, folder, size', [ - ('regular_0', testdir1, 1000000), -]) -def test_file_size_disabled(tags_to_apply, filename, folder, size, get_configuration, configure_environment, - restart_syscheckd, wait_for_fim_start): +@pytest.mark.parametrize('filename, folder, size', [('regular_0', testdir1, 1000000)]) +def test_file_size_disabled(filename, folder, size, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of the monitored file to generate 'diff' information when the 'file_size' option is disabled. For this purpose, the test @@ -122,14 +117,11 @@ def test_file_size_disabled(tags_to_apply, filename, folder, size, get_configura 'file_size' tag. Finally, the test will verify that the FIM event related to the reached file size limit has not been generated. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - filename: type: str brief: Name of the testing file to be created. @@ -145,6 +137,9 @@ def test_file_size_disabled(tags_to_apply, filename, folder, size, get_configura - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. 
@@ -166,15 +161,10 @@ def test_file_size_disabled(tags_to_apply, filename, folder, size, get_configura tags: - diff - scheduled - - time_travel ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' - to_write = generate_string(size, '0') create_file(REGULAR, folder, filename, content=to_write) - check_time_travel(scheduled) - with pytest.raises(TimeoutError): - wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_file_size_limit_reached) + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_FILE_SIZE_LIMIT_REACHED)) diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_values.py b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_values.py index 9ea0bbeaf1..adacce9eac 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_file_size_values.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_file_size_values.py @@ -63,15 +63,18 @@ import os import pytest -from test_fim.test_files.test_report_changes.common import generate_string, translate_size, disable_file_max_size, \ - restore_file_max_size, make_diff_file_path, disable_rt_delay, restore_rt_delay -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, REGULAR, callback_file_size_limit_reached, generate_params, create_file, \ - check_time_travel, callback_detect_event, modify_file_content, callback_deleted_diff_folder +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test -from wazuh_testing.tools.monitoring import FileMonitor - +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file, modify_file_content +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules.fim.event_monitor import (CB_FILE_SIZE_LIMIT_REACHED, CB_DIFF_FOLDER_DELETED, + ERR_MSG_FIM_EVENT_NOT_DETECTED, ERR_MSG_FILE_LIMIT_REACHED, + ERR_MSG_FOLDER_DELETED, callback_detect_event) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.utils import generate_params +from test_fim.common import (generate_string, translate_size, disable_file_max_size, restore_file_max_size, + make_diff_file_path, disable_rt_delay, restore_rt_delay) # Marks pytestmark = [pytest.mark.tier(level=1)] @@ -93,8 +96,7 @@ 'TEST_DIRECTORIES': directory_str, 'FILE_SIZE_ENABLED': 'yes', 'DISK_QUOTA_ENABLED': 'no', - 'DISK_QUOTA_LIMIT': '2KB', - 'MODULE_NAME': __name__}, + 'DISK_QUOTA_LIMIT': '2KB'}, apply_to_all=({'FILE_SIZE_LIMIT': file_size_elem} for file_size_elem in file_size_values)) @@ -129,15 +131,9 @@ def extra_configuration_after_yield(): # Tests -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_diff'} -]) -@pytest.mark.parametrize('filename, folder', [ - ('regular_0', testdir1), -]) -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_file_size_values(tags_to_apply, filename, folder, get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): +@pytest.mark.parametrize('filename, folder', [('regular_0', testdir1)]) +def test_file_size_values(filename, 
folder, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon limits the size of the monitored file to generate 'diff' information from the limit set in the 'file_size' tag. For this purpose, the test @@ -147,14 +143,11 @@ def test_file_size_values(tags_to_apply, filename, folder, get_configuration, co file size limit has been generated, and the compressed file in the 'queue/diff/local' directory does not exist. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - filename: type: str brief: Name of the testing file to be created. @@ -167,9 +160,15 @@ def test_file_size_values(tags_to_apply, filename, folder, get_configuration, co - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. + - wait_for_fim_start: + type: fixture + brief: Wait for realtime start, whodata start, or end of initial FIM scan. assertions: - Verify that the 'diff' folder is created when a monitored file does not exceed the size limit. @@ -191,7 +190,6 @@ def test_file_size_values(tags_to_apply, filename, folder, get_configuration, co - scheduled - time_travel ''' - check_apply_test(tags_to_apply, get_configuration['tags']) scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' size_limit = translate_size(get_configuration['metadata']['file_size_limit']) diff_file_path = make_diff_file_path(folder=folder, filename=filename) @@ -200,10 +198,8 @@ def test_file_size_values(tags_to_apply, filename, folder, get_configuration, co to_write = generate_string(int(size_limit / 2), '0') create_file(REGULAR, folder, filename, content=to_write) - check_time_travel(scheduled) - wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event, - error_message='Did not receive expected "Sending FIM event: ..." event.') + error_message=ERR_MSG_FIM_EVENT_NOT_DETECTED) if not os.path.exists(diff_file_path): raise FileNotFoundError(f"{diff_file_path} not found. It should exist before increasing the size.") @@ -212,16 +208,13 @@ def test_file_size_values(tags_to_apply, filename, folder, get_configuration, co to_write = generate_string(size_limit, '0') modify_file_content(folder, filename, new_content=to_write * 3) - check_time_travel(scheduled) - - wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_deleted_diff_folder, - error_message='Did not receive expected "Folder ... has been deleted." event.') + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_DIFF_FOLDER_DELETED), + error_message=ERR_MSG_FOLDER_DELETED) if os.path.exists(diff_file_path): raise FileExistsError(f"{diff_file_path} found. It should not exist after increasing the size.") - wazuh_log_monitor.start( - timeout=global_parameters.default_timeout, - callback=callback_file_size_limit_reached, - error_message='Did not receive expected ' - '"File ...
is too big for configured maximum size to perform diff operation" event.') + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_FILE_SIZE_LIMIT_REACHED), + error_message=ERR_MSG_FILE_LIMIT_REACHED) diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_large_changes.py b/tests/integration/test_fim/test_files/test_report_changes/test_large_changes.py index b6f75fd340..bdc6c3c619 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_large_changes.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_large_changes.py @@ -66,26 +66,27 @@ ''' import gzip import os -import re import shutil import subprocess import sys import pytest -from test_fim.test_files.test_report_changes.common import generate_string -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, callback_detect_event, REGULAR, create_file, \ - generate_params, check_time_travel -from wazuh_testing.tools import PREFIX, WAZUH_PATH -from wazuh_testing.tools.configuration import check_apply_test, load_wazuh_configurations +from wazuh_testing.tools import PREFIX +from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS +from wazuh_testing.modules.fim.event_monitor import callback_detect_event +from wazuh_testing.modules.fim.utils import create_file, generate_params +from test_fim.common import generate_string, make_diff_file_path # Marks pytestmark = pytest.mark.tier(level=1) -# variables +# variables +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS test_directories = [os.path.join(PREFIX, 'testdir')] nodiff_file = os.path.join(PREFIX, 'testdir_nodiff', 'regular_file') directory_str = ','.join(test_directories) @@ -100,8 +101,7 @@ conf_params, conf_metadata = generate_params(extra_params={'REPORT_CHANGES': {'report_changes': 'yes'}, 'TEST_DIRECTORIES': directory_str, - 'NODIFF_FILE': nodiff_file, - 'MODULE_NAME': __name__}) + 'NODIFF_FILE': nodiff_file}) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) @@ -127,10 +127,7 @@ def extra_configuration_after_yield(): # Tests - -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_report'} -]) +@pytest.mark.skip('Test skipped for flaky behavior, after it is fixed by Issue wazuh/wazuh#3783, it will be unblocked') @pytest.mark.parametrize('filename, folder, original_size, modified_size', [ ('regular_0', testdir, 500, 500), ('regular_1', testdir, 30000, 30000), @@ -140,9 +137,8 @@ def extra_configuration_after_yield(): ('regular_5', testdir, 20000, 10), ('regular_6', testdir, 70000, 10), ]) -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_large_changes(filename, folder, original_size, modified_size, tags_to_apply, get_configuration, - configure_environment, restart_syscheckd, wait_for_fim_start): +def test_large_changes(filename, folder, original_size, modified_size, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon detects the character limit in the file changes is reached showing the 'More changes' tag in the 'content_changes' field of the 
generated events. For this @@ -168,15 +164,15 @@ def test_large_changes(filename, folder, original_size, modified_size, tags_to_a - modified_size: type: int brief: Size of the testing file in bytes after being modified. - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - get_configuration: type: fixture brief: Get configurations from the module. - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. @@ -205,27 +201,19 @@ def test_large_changes(filename, folder, original_size, modified_size, tags_to_a tags: - diff - scheduled - - time_travel ''' limit = 59391 has_more_changes = False original_file = os.path.join(folder, filename) unzip_diff_file = os.path.join(unzip_diff_dir, filename + '-old') - diff_file_path = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - if sys.platform == 'win32': - diff_file_path = os.path.join(diff_file_path, 'c') - diff_file_path = os.path.join(diff_file_path, re.match(r'^[a-zA-Z]:(\\){1,2}(\w+)(\\){0,2}$', folder).group(2), - filename, 'last-entry.gz') - else: - diff_file_path = os.path.join(diff_file_path, folder.strip('/'), filename, 'last-entry.gz') + diff_file_path = make_diff_file_path(folder, filename) - check_apply_test(tags_to_apply, get_configuration['tags']) fim_mode = get_configuration['metadata']['fim_mode'] # Create the file and capture the event. original_string = generate_string(original_size, '0') create_file(REGULAR, folder, filename, content=original_string) - check_time_travel(fim_mode == 'scheduled', monitor=wazuh_log_monitor) + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event).result() # Store uncompressed diff file in backup folder @@ -236,7 +224,7 @@ def test_large_changes(filename, folder, original_size, modified_size, tags_to_a # Modify the file with new content modified_string = generate_string(modified_size, '1') create_file(REGULAR, folder, filename, content=modified_string) - check_time_travel(fim_mode == 'scheduled', monitor=wazuh_log_monitor) + event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event).result() # Run the diff/fc command and get the output length diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_report_changes_and_diff.py b/tests/integration/test_fim/test_files/test_report_changes/test_report_changes_and_diff.py index fa8d4ba7e9..97226ce6bb 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_report_changes_and_diff.py +++ b/tests/integration/test_fim/test_files/test_report_changes/test_report_changes_and_diff.py @@ -28,7 +28,6 @@ os_platform: - linux - - windows os_version: - Arch Linux @@ -66,67 +65,54 @@ - fim_report_changes ''' import os -import re import sys import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (CHECK_ALL, LOG_FILE_PATH, regular_file_cud, WAZUH_PATH, generate_params) -from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test +from wazuh_testing.tools import PREFIX, configuration from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing import global_parameters, LOG_FILE_PATH +from wazuh_testing.modules.fim import
FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.utils import regular_file_cud +from test_fim.common import make_diff_file_path + # Marks +pytestmark = [pytest.mark.linux, pytest.mark.win32, pytest.mark.tier(level=1)] -pytestmark = pytest.mark.tier(level=1) +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') -# variables -test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_report_changes_and_diff.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_report_changes_and_diff.yaml') + +# variables test_directories = [os.path.join(PREFIX, 'testdir_reports'), os.path.join(PREFIX, 'testdir_nodiff')] nodiff_file = os.path.join(PREFIX, 'testdir_nodiff', 'regular_file') directory_str = ','.join(test_directories) testdir_reports, testdir_nodiff = test_directories -configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') -options = {CHECK_ALL} - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) -# configurations - -conf_params, conf_metadata = generate_params(extra_params={'REPORT_CHANGES': {'report_changes': 'yes'}, - 'TEST_DIRECTORIES': directory_str, - 'NODIFF_FILE': nodiff_file, - 'MODULE_NAME': __name__}) - -configurations = load_wazuh_configurations(configurations_path, __name__, - params=conf_params, - metadata=conf_metadata - ) - -# fixtures - -@pytest.fixture(scope='module', params=configurations) -def get_configuration(request): - """Get configurations from the module.""" - return request.param +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = configuration.get_test_cases_data(test_cases_path) +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['TEST_DIRECTORIES'] = directory_str + configuration_parameters[count]['NODIFF_FILE'] = nodiff_file +configurations = configuration.load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) # tests - -@pytest.mark.parametrize('tags_to_apply', [ - {'ossec_conf_report'} -]) -@pytest.mark.parametrize('folder, checkers', [ - (testdir_reports, options), - (testdir_nodiff, options) -]) -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_reports_file_and_nodiff(folder, checkers, tags_to_apply, - get_configuration, configure_environment, - restart_syscheckd, wait_for_fim_start): +@pytest.mark.parametrize('test_folders', [test_directories], scope="module", ids='') +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +def test_reports_file_and_nodiff(configuration, metadata, set_wazuh_configuration, + configure_local_internal_options_function, restart_syscheck_function, + create_monitored_folders_module, wait_syscheck_start): ''' description: Check if the 'wazuh-syscheckd' daemon reports the file changes (or truncates if required) in the generated events using the 'nodiff' tag and vice versa. For this purpose, the test @@ -136,32 +122,32 @@ def test_reports_file_and_nodiff(folder, checkers, tags_to_apply, 'content_changes' field a message indicating that 'diff' is truncated because the 'nodiff' option is used. 
- wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - folder: - type: str - brief: Path to the directory where the testing files will be created. - - checkers: + - configuration: type: dict - brief: Syscheck 'check_' fields to be generated. - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - - get_configuration: + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - set_wazuh_configuration: + type: fixture + brief: Set ossec.conf configuration. + - configure_local_internal_options_function: type: fixture - brief: Get configurations from the module. - - configure_environment: + brief: Set local_internal_options.conf file. + - restart_syscheck_function: type: fixture - brief: Configure a custom environment for testing. - - restart_syscheckd: + brief: restart syscheckd daemon, and truncate the ossec.log. + - create_monitored_folders_module type: fixture - brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start: + brief: Create folders to be monitored, delete after test. + - wait_syscheck_start: type: fixture - brief: Wait for realtime start, whodata start, or end of initial FIM scan. + brief: check that the starting fim scan is detected. assertions: - Verify that for each modified file a 'diff' file is generated. @@ -181,23 +167,16 @@ def test_reports_file_and_nodiff(folder, checkers, tags_to_apply, tags: - diff - scheduled - - time_travel ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - - file_list = ['regular_file'] - is_truncated = folder == testdir_nodiff + file_list = [f"regular_file"] + is_truncated = metadata['folder'] == 'testdir_nodiff' + folder = os.path.join(PREFIX, metadata['folder']) + escaped = True if sys.platform == 'win32' else False def report_changes_validator(event): """Validate content_changes attribute exists in the event""" for file in file_list: - diff_file = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - if sys.platform == 'win32': - diff_file = os.path.join(diff_file, 'c') - diff_file = os.path.join(diff_file, re.match(r'^[a-zA-Z]:(\\){1,2}(\w+)(\\){0,2}$', folder).group(2), - file) - else: - diff_file = os.path.join(diff_file, folder.strip('/'), file) + diff_file = make_diff_file_path(folder, file) assert os.path.exists(diff_file), f'{diff_file} does not exist' assert event['data'].get('content_changes') is not None, f'content_changes is empty' @@ -210,7 +189,7 @@ def no_diff_validator(event): assert '' not in event['data'].get('content_changes'), \ f'content_changes is truncated' - regular_file_cud(folder, wazuh_log_monitor, file_list=file_list, - time_travel=get_configuration['metadata']['fim_mode'] == 'scheduled', - min_timeout=global_parameters.default_timeout, triggers_event=True, - validators_after_update=[report_changes_validator, no_diff_validator]) + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + regular_file_cud(folder, wazuh_log_monitor, file_list=file_list, min_timeout=global_parameters.default_timeout*4, + triggers_event=True, validators_after_update=[report_changes_validator, no_diff_validator], + escaped=escaped) diff --git a/tests/integration/test_fim/test_files/test_report_changes/test_report_deleted_diff.py b/tests/integration/test_fim/test_files/test_report_changes/test_report_deleted_diff.py index c608ba4026..7c2d25a2e7 100644 --- a/tests/integration/test_fim/test_files/test_report_changes/test_report_deleted_diff.py +++ 
b/tests/integration/test_fim/test_files/test_report_changes/test_report_deleted_diff.py @@ -68,20 +68,23 @@ import time import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, WAZUH_PATH, callback_detect_event, - REGULAR, create_file, generate_params, detect_initial_scan, check_time_travel) +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import get_wazuh_conf, set_section_wazuh_conf, load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.services import restart_wazuh_with_new_conf +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS +from wazuh_testing.modules.fim.utils import generate_params +from wazuh_testing.modules.fim.event_monitor import detect_initial_scan, callback_detect_event +from test_fim.common import make_diff_file_path # Marks pytestmark = pytest.mark.tier(level=1) # variables - +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') @@ -100,13 +103,9 @@ def change_conf(report_value): """"Return a new ossec configuration with a changed report_value""" conf_params, conf_metadata = generate_params(extra_params={'REPORT_CHANGES': {'report_changes': report_value}, 'TEST_DIRECTORIES': directory_str, - 'NODIFF_FILE': nodiff_file, - 'MODULE_NAME': __name__}) + 'NODIFF_FILE': nodiff_file}) - return load_wazuh_configurations(configurations_path, __name__, - params=conf_params, - metadata=conf_metadata - ) + return load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) configurations = change_conf('yes') @@ -146,14 +145,13 @@ def wait_for_event(fim_mode): fim_mode : str FIM mode (scheduled, realtime, whodata) """ - check_time_travel(time_travel=fim_mode == 'scheduled', monitor=wazuh_log_monitor) # Wait until event is detected wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event, error_message='Did not receive expected "Sending FIM event: ..." event') -def create_and_check_diff(name, path, fim_mode): +def create_file_and_check_diff(name, path, fim_mode): """Create a file and check if it is duplicated in diff directory. Parameters @@ -171,13 +169,11 @@ def create_and_check_diff(name, path, fim_mode): String with the duplicated file path (diff) """ create_file(REGULAR, path, name, content='Sample content') - wait_for_event(fim_mode) - diff_file = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - if sys.platform == 'win32': - diff_file = os.path.join(diff_file, 'c') - diff_file = os.path.join(diff_file, re.match(r'^[a-zA-Z]:(\\){1,2}(\w+)(\\){0,2}$', path).group(2), name) - else: - diff_file = os.path.join(diff_file, path.strip('/'), name) + + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_event, + error_message='Did not receive expected "Sending FIM event: ..." 
event') + + diff_file = make_diff_file_path(path, name) assert os.path.exists(diff_file), f'{diff_file} does not exist' return diff_file @@ -186,6 +182,7 @@ def disable_report_changes(fim_mode): """Change the `report_changes` value in the `ossec.conf` file and then restart `Syscheck` to apply the changes.""" new_conf = change_conf(report_value='no') new_ossec_conf = set_section_wazuh_conf(new_conf[0].get('sections')) + restart_wazuh_with_new_conf(new_ossec_conf) # Wait for FIM scan to finish detect_fim_scan(wazuh_log_monitor, fim_mode) @@ -194,8 +191,8 @@ def disable_report_changes(fim_mode): # tests @pytest.mark.parametrize('path', [testdir_nodiff]) -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_report_when_deleted_directories(path, get_configuration, configure_environment, restart_syscheckd, +def test_report_when_deleted_directories(path, get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon deletes the 'diff' folder created in the 'queue/diff/local' @@ -205,7 +202,7 @@ def test_report_when_deleted_directories(path, get_configuration, configure_envi will remove the monitored folder, wait for the FIM 'deleted' event, and verify that the corresponding 'diff' folder is deleted. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -219,6 +216,9 @@ def test_report_when_deleted_directories(path, get_configuration, configure_envi - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. @@ -246,97 +246,29 @@ def test_report_when_deleted_directories(path, get_configuration, configure_envi - time_travel ''' fim_mode = get_configuration['metadata']['fim_mode'] - diff_dir = os.path.join(WAZUH_PATH, 'queue', 'diff', 'local') - - if sys.platform == 'win32': - diff_dir = os.path.join(diff_dir, 'c') - diff_dir = os.path.join(diff_dir, re.match(r'^[a-zA-Z]:(\\){1,2}(\w+)(\\){0,2}$', path).group(2), FILE_NAME) - else: - diff_dir = os.path.join(diff_dir, path.strip('/'), FILE_NAME) - create_and_check_diff(FILE_NAME, path, fim_mode) + diff_dir = create_file_and_check_diff(FILE_NAME, path, fim_mode) shutil.rmtree(path, ignore_errors=True) wait_for_event(fim_mode) + # Wait a second so diff path is deleted + if 'scheduled' not in fim_mode: + time.sleep(2) assert not os.path.exists(diff_dir), f'{diff_dir} exists' -@pytest.mark.parametrize('path', [testdir_reports]) -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_no_report_changes(path, get_configuration, configure_environment, - restart_syscheckd, wait_for_fim_start): +def test_report_changes_after_restart(get_configuration, configure_environment, + configure_local_internal_options_module, restart_syscheckd, wait_for_fim_start): ''' - description: Check if the 'wazuh-syscheckd' daemon deletes the 'diff' folder created in the 'queue/diff/local' - directory when disabling the 'report_changes' option. For this purpose, the test will monitor - a directory and add a testing file inside it. Then, it will check if a 'diff' file is created - for the modified testing file. 
Next, the test will backup the main configuration, disable - the 'report_changes' option, and check if the diff folder has been deleted. Finally, the test - will restore the backed configuration and verify that the initial scan of FIM scan is made. - - wazuh_min_version: 4.2.0 - - tier: 1 - - parameters: - - path: - type: str - brief: Path to the testing file to be created. - - get_configuration: - type: fixture - brief: Get configurations from the module. - - configure_environment: - type: fixture - brief: Configure a custom environment for testing. - - restart_syscheckd: - type: fixture - brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start: - type: fixture - brief: Wait for realtime start, whodata start, or end of initial FIM scan. - - assertions: - - Verify that FIM adds the 'diff' file in the 'queue/diff/local' directory - when monitoring the corresponding testing file. - - Verify that FIM deletes the 'diff' folder in the 'queue/diff/local' directory - when disabling the 'report_changes' option. - - input_description: Different test cases are contained in external YAML file (wazuh_conf.yaml) which - includes configuration settings for the 'wazuh-syscheckd' daemon and, these - are combined with the testing directory to be monitored defined in the module. - - expected_output: - - r'.*Sending FIM event: (.+)$' ('added' events) - - tags: - - diff - - scheduled - - time_travel - ''' - fim_mode = get_configuration['metadata']['fim_mode'] - diff_file = create_and_check_diff(FILE_NAME, path, fim_mode) - backup_conf = get_wazuh_conf() - - try: - disable_report_changes(fim_mode) - assert not os.path.exists(diff_file), f'{diff_file} exists' - finally: - # Restore the original conf file so as not to interfere with other tests - restart_wazuh_with_new_conf(backup_conf) - detect_fim_scan(wazuh_log_monitor, fim_mode) - -@pytest.mark.skip(reason="It will be blocked by wazuh/wazuh#9298, when it was solve we can enable again this test") -def test_report_changes_after_restart(get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): - ''' - description: Check if the 'wazuh-syscheckd' daemon deletes the 'diff' folder created in the 'queue/diff/local' + description: Check if the 'wazuh-syscheckd' daemon deletes the 'diff' folder created in the 'queue/diff/file' directory when restarting that daemon, and the 'report_changes' option is disabled. For this purpose, the test will monitor a directory and add a testing file inside it. Then, it will check - if a 'diff' file is created for the modified testing file. The folders in the 'queue/diff/local' + if a 'diff' file is created for the modified testing file. The folders in the 'queue/diff/file' directory will be deleted after the 'wazuh-syscheckd' daemon restart but will be created again if the 'report_changes' option is still active. To avoid this, the test will disable the 'report_changes' option (backing the main configuration) before restarting the 'wazuh-syscheckd' daemon to ensure that the directories will not be created again. Finally, the test will restore the backed configuration and verify that the initial scan of FIM is made. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -347,6 +279,9 @@ def test_report_changes_after_restart(get_configuration, configure_environment, - configure_environment: type: fixture brief: Configure a custom environment for testing. + - configure_local_internal_options_module: + type: fixture + brief: Configure the local internal options file. 
- restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. @@ -355,9 +290,9 @@ def test_report_changes_after_restart(get_configuration, configure_environment, brief: Wait for realtime start, whodata start, or end of initial FIM scan. assertions: - - Verify that FIM adds the 'diff' file in the 'queue/diff/local' directory + - Verify that FIM adds the 'diff' file in the 'queue/diff/file' directory when monitoring the corresponding testing file. - - Verify that FIM deletes the 'diff' folder in the 'queue/diff/local' directory + - Verify that FIM deletes the 'diff' folder in the 'queue/diff/file' directory when restarting the daemon with the 'report_changes' option disabled. input_description: Different test cases are contained in external YAML file (wazuh_conf.yaml) which @@ -375,11 +310,13 @@ def test_report_changes_after_restart(get_configuration, configure_environment, fim_mode = get_configuration['metadata']['fim_mode'] # Create a file in the monitored path to force the creation of a report in diff - diff_file_path = create_and_check_diff(FILE_NAME, testdir_reports, fim_mode) + diff_file_path = create_file_and_check_diff(FILE_NAME, testdir_reports, fim_mode) backup_conf = get_wazuh_conf() try: disable_report_changes(fim_mode) + # Wait for FIM to complete its start and delete the diff_file + time.sleep(5) assert not os.path.exists(diff_file_path), f'{diff_file_path} exists' finally: # Restore the original conf file so as not to interfere with other tests diff --git a/tests/integration/test_fim/test_files/test_restrict/data/wazuh_conf.yaml b/tests/integration/test_fim/test_files/test_restrict/data/wazuh_conf.yaml index 62f163497e..cca1d31b2f 100644 --- a/tests/integration/test_fim/test_files/test_restrict/data/wazuh_conf.yaml +++ b/tests/integration/test_fim/test_files/test_restrict/data/wazuh_conf.yaml @@ -1,189 +1,200 @@ ---- # conf 1 - tags: - - valid_empty + - valid_empty apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: "" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - restrict: "" + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 2 - tags: - - valid_regex - - valid_regex1 + - valid_regex + - valid_regex1 apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: ".restricted$" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value:
- restrict: .restricted$ + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 3 - tags: - - valid_regex - - valid_regex2 + - valid_regex + - valid_regex2 apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: "^restricted" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - restrict: ^restricted + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 4 - tags: - - valid_regex - - valid_regex3 + - valid_regex + - valid_regex3 apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: "filerestricted|other_restricted$" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - restrict: filerestricted|other_restricted$ + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 5 - tags: - - valid_regex_incomplete_unix + - valid_regex_incomplete_unix apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: "^/testdir1/f|^/testdir1/subdir/f|^/testdir2/f|^/testdir2/subdir/f" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - restrict: ^/testdir1/f|^/testdir1/subdir/f + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' # conf 6 - tags: - - valid_regex_incomplete_win + - valid_regex_incomplete_win apply_to_modules: - - test_restrict_valid + - test_restrict_valid sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: 
TEST_DIRECTORIES - attributes: - - check_all: 'yes' - - FIM_MODE - - restrict: "^c:\\testdir1\\f|^c:\\testdir1\\subdir\\f|^c:\\testdir2\\f|^c:\\testdir2\\subdir\\f" - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 2 + - directories: + value: TEST_DIRECTORIES + attributes: + - check_all: 'yes' + - FIM_MODE + - restrict: ^c:\testdir1\f|^c:\testdir1\subdir\f + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_restrict/test_restrict_valid.py b/tests/integration/test_fim/test_files/test_restrict/test_restrict_valid.py index 5e7aad61b6..9bbc21105e 100644 --- a/tests/integration/test_fim/test_files/test_restrict/test_restrict_valid.py +++ b/tests/integration/test_fim/test_files/test_restrict/test_restrict_valid.py @@ -61,46 +61,35 @@ ''' import os import sys - import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, callback_detect_event, callback_restricted, create_file, \ - REGULAR, generate_params, check_time_travel +from time import sleep + +from wazuh_testing import global_parameters, REGULAR, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.file import create_file +from wazuh_testing.modules.fim.event_monitor import callback_detect_file_added_event, callback_restricted +from wazuh_testing.modules.fim.utils import generate_params -# Marks +# Marks pytestmark = pytest.mark.tier(level=1) -# variables - +# Variables test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') test_directories = [os.path.join(PREFIX, 'testdir1'), - os.path.join(PREFIX, 'testdir1', 'subdir'), - os.path.join(PREFIX, 'testdir2'), - os.path.join(PREFIX, 'testdir2', 'subdir') + os.path.join(PREFIX, 'testdir1', 'subdir') ] -testdir1, testdir1_sub, testdir2, testdir2_sub = test_directories - -directory_str = ','.join([os.path.join(PREFIX, 'testdir1'), os.path.join(PREFIX, 'testdir2')]) - wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) -# configurations - -conf_params, conf_metadata = generate_params(extra_params={'TEST_DIRECTORIES': directory_str}) - -configurations = load_wazuh_configurations(configurations_path, __name__, - params=conf_params, - metadata=conf_metadata - ) +# Configurations +conf_params, conf_metadata = generate_params(extra_params={'TEST_DIRECTORIES': test_directories[0]}) +configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) -# fixtures - +# Fixtures @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" @@ -122,12 +111,10 @@ def get_configuration(request): {f'valid_regex_incomplete_{"win" if sys.platform == "win32" else "unix"}'}), ('fileinfolder1', 'wb', b"Sample content", True, {f'valid_regex_incomplete_{"win" if sys.platform == "win32" else "unix"}'}), - ('testing_regex', 'w', "", 
False, - {f'valid_regex_incomplete_{"win" if sys.platform == "win32" else "unix"}'}), + ('testing_regex', 'w', "", False, {f'valid_regex_incomplete_{"win" if sys.platform == "win32" else "unix"}'}), ]) -def test_restrict(folder, filename, mode, content, triggers_event, tags_to_apply, - get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): +def test_restrict(folder, filename, mode, content, triggers_event, tags_to_apply, get_configuration, + configure_environment, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon detects or ignores events in monitored files depending on the value set in the 'restrict' attribute. This attribute limit checks to files that match @@ -188,21 +175,19 @@ def test_restrict(folder, filename, mode, content, triggers_event, tags_to_apply tags: - scheduled - - time_travel ''' check_apply_test(tags_to_apply, get_configuration['tags']) # Create text files create_file(REGULAR, folder, filename, content=content) - scheduled = get_configuration['metadata']['fim_mode'] == 'scheduled' # Go ahead in time to let syscheck perform a new scan - check_time_travel(scheduled, monitor=wazuh_log_monitor) + if get_configuration['metadata']['fim_mode'] == 'scheduled': + sleep(3) if triggers_event: event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=callback_detect_event).result() - assert event['data']['type'] == 'added', f'Event type not equal' + callback=callback_detect_file_added_event).result() assert event['data']['path'] == os.path.join(folder, filename), f'Event path not equal' else: while True: diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/data/configuration_template/configuration_whodata_policy_change.yaml b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/configuration_template/configuration_whodata_policy_change.yaml new file mode 100644 index 0000000000..c9ac739786 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/configuration_template/configuration_whodata_policy_change.yaml @@ -0,0 +1,28 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - directories: + value: TEST_DIRECTORIES + attributes: + - whodata: 'yes' + - windows_audit_interval: + value: AUDIT_INTERVAL + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_disable.csv b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_disable.csv new file mode 100644 index 0000000000..a4ac1432a6 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_disable.csv @@ -0,0 +1,66 @@ +Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value +WIN2019,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Security System Extension,{0CCE9211-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security State Change,{0CCE9210-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other System Events,{0CCE9214-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Group 
Membership,{0CCE9249-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,User / Device Claims,{0CCE9247-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Network Policy Server,{0CCE9243-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Other Logon/Logoff Events,{0CCE921C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Special Logon,{0CCE921B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Quick Mode,{0CCE9219-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Main Mode,{0CCE9218-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Account Lockout,{0CCE9217-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logoff,{0CCE9216-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logon,{0CCE9215-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Handle Manipulation,{0CCE9223-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Central Policy Staging,{0CCE9246-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Removable Storage,{0CCE9245-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Detailed File Share,{0CCE9244-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Object Access Events,{0CCE9227-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Connection,{0CCE9226-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Packet Drop,{0CCE9225-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File Share,{0CCE9224-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Generated,{0CCE9222-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Certification Services,{0CCE9221-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,SAM,{0CCE9220-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kernel Object,{0CCE921F-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Registry,{0CCE921E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File System,{0CCE921D-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other Privilege Use Events,{0CCE922A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Non Sensitive Privilege Use,{0CCE9229-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Sensitive Privilege Use,{0CCE9228-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,RPC Events,{0CCE922E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Token Right Adjusted Events,{0CCE924A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Creation,{0CCE922B-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Termination,{0CCE922C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Plug and Play Events,{0CCE9248-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,DPAPI Activity,{0CCE922D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Policy Change Events,{0CCE9234-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authentication Policy Change,{0CCE9230-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Audit Policy Change,{0CCE922F-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Policy Change,{0CCE9233-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authorization Policy Change,{0CCE9231-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,MPSSVC Rule-Level Policy 
Change,{0CCE9232-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Management Events,{0CCE923A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Group Management,{0CCE9239-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Distribution Group Management,{0CCE9238-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security Group Management,{0CCE9237-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Computer Account Management,{0CCE9236-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,User Account Management,{0CCE9235-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Directory Service Replication,{0CCE923D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Directory Service Access,{0CCE923B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Detailed Directory Service Replication,{0CCE923E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Directory Service Changes,{0CCE923C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Logon Events,{0CCE9241-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kerberos Service Ticket Operations,{0CCE9240-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Credential Validation,{0CCE923F-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Kerberos Authentication Service,{0CCE9242-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,,Option:CrashOnAuditFail,,Disabled,,0 +WIN2019,,Option:FullPrivilegeAuditing,,Disabled,,0 +WIN2019,,Option:AuditBaseObjects,,Disabled,,0 +WIN2019,,Option:AuditBaseDirectories,,Disabled,,0 +WIN2019,,FileGlobalSacl,,,, +WIN2019,,RegistryGlobalSacl,,,, diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_enable.csv b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_enable.csv new file mode 100644 index 0000000000..2781dfc3b7 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_enable.csv @@ -0,0 +1,66 @@ +Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value +WIN2019,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Security System Extension,{0CCE9211-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security State Change,{0CCE9210-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other System Events,{0CCE9214-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Group Membership,{0CCE9249-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,User / Device Claims,{0CCE9247-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Network Policy Server,{0CCE9243-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Other Logon/Logoff Events,{0CCE921C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Special Logon,{0CCE921B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Quick Mode,{0CCE9219-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Main Mode,{0CCE9218-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Account Lockout,{0CCE9217-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logoff,{0CCE9216-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logon,{0CCE9215-69AE-11D9-BED3-505054503030},Success and 
Failure,,3 +WIN2019,System,Handle Manipulation,{0CCE9223-69AE-11D9-BED3-505054503030},Success,,0 +WIN2019,System,Central Policy Staging,{0CCE9246-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Removable Storage,{0CCE9245-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Detailed File Share,{0CCE9244-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Object Access Events,{0CCE9227-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Connection,{0CCE9226-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Packet Drop,{0CCE9225-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File Share,{0CCE9224-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Generated,{0CCE9222-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Certification Services,{0CCE9221-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,SAM,{0CCE9220-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kernel Object,{0CCE921F-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Registry,{0CCE921E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File System,{0CCE921D-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other Privilege Use Events,{0CCE922A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Non Sensitive Privilege Use,{0CCE9229-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Sensitive Privilege Use,{0CCE9228-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,RPC Events,{0CCE922E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Token Right Adjusted Events,{0CCE924A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Creation,{0CCE922B-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Termination,{0CCE922C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Plug and Play Events,{0CCE9248-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,DPAPI Activity,{0CCE922D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Policy Change Events,{0CCE9234-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authentication Policy Change,{0CCE9230-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Audit Policy Change,{0CCE922F-69AE-11D9-BED3-505054503030},Success,,0 +WIN2019,System,Filtering Platform Policy Change,{0CCE9233-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authorization Policy Change,{0CCE9231-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,MPSSVC Rule-Level Policy Change,{0CCE9232-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Management Events,{0CCE923A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Group Management,{0CCE9239-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Distribution Group Management,{0CCE9238-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security Group Management,{0CCE9237-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Computer Account Management,{0CCE9236-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,User Account Management,{0CCE9235-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Directory Service Replication,{0CCE923D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Directory Service Access,{0CCE923B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Detailed Directory Service Replication,{0CCE923E-69AE-11D9-BED3-505054503030},No Auditing,,0 
+WIN2019,System,Directory Service Changes,{0CCE923C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Logon Events,{0CCE9241-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kerberos Service Ticket Operations,{0CCE9240-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Credential Validation,{0CCE923F-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Kerberos Authentication Service,{0CCE9242-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,,Option:CrashOnAuditFail,,Disabled,,0 +WIN2019,,Option:FullPrivilegeAuditing,,Disabled,,0 +WIN2019,,Option:AuditBaseObjects,,Disabled,,0 +WIN2019,,Option:AuditBaseDirectories,,Disabled,,0 +WIN2019,,FileGlobalSacl,,,, +WIN2019,,RegistryGlobalSacl,,,, diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_success_removed.csv b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_success_removed.csv new file mode 100644 index 0000000000..f1c4bfe647 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/policy_success_removed.csv @@ -0,0 +1,66 @@ +Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value +WIN2019,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Security System Extension,{0CCE9211-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security State Change,{0CCE9210-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other System Events,{0CCE9214-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Group Membership,{0CCE9249-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,User / Device Claims,{0CCE9247-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Network Policy Server,{0CCE9243-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Other Logon/Logoff Events,{0CCE921C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Special Logon,{0CCE921B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Quick Mode,{0CCE9219-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,IPsec Main Mode,{0CCE9218-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Account Lockout,{0CCE9217-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logoff,{0CCE9216-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Logon,{0CCE9215-69AE-11D9-BED3-505054503030},Success and Failure,,3 +WIN2019,System,Handle Manipulation,{0CCE9223-69AE-11D9-BED3-505054503030},Success removed,,0 +WIN2019,System,Central Policy Staging,{0CCE9246-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Removable Storage,{0CCE9245-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Detailed File Share,{0CCE9244-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Object Access Events,{0CCE9227-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Connection,{0CCE9226-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Filtering Platform Packet Drop,{0CCE9225-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File Share,{0CCE9224-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Generated,{0CCE9222-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Certification 
Services,{0CCE9221-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,SAM,{0CCE9220-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kernel Object,{0CCE921F-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Registry,{0CCE921E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,File System,{0CCE921D-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Other Privilege Use Events,{0CCE922A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Non Sensitive Privilege Use,{0CCE9229-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Sensitive Privilege Use,{0CCE9228-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,RPC Events,{0CCE922E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Token Right Adjusted Events,{0CCE924A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Creation,{0CCE922B-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Process Termination,{0CCE922C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Plug and Play Events,{0CCE9248-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,DPAPI Activity,{0CCE922D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Policy Change Events,{0CCE9234-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authentication Policy Change,{0CCE9230-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Audit Policy Change,{0CCE922F-69AE-11D9-BED3-505054503030},Success removed,,0 +WIN2019,System,Filtering Platform Policy Change,{0CCE9233-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Authorization Policy Change,{0CCE9231-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,MPSSVC Rule-Level Policy Change,{0CCE9232-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Management Events,{0CCE923A-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Application Group Management,{0CCE9239-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Distribution Group Management,{0CCE9238-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Security Group Management,{0CCE9237-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Computer Account Management,{0CCE9236-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,User Account Management,{0CCE9235-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Directory Service Replication,{0CCE923D-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Directory Service Access,{0CCE923B-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Detailed Directory Service Replication,{0CCE923E-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Directory Service Changes,{0CCE923C-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Other Account Logon Events,{0CCE9241-69AE-11D9-BED3-505054503030},No Auditing,,0 +WIN2019,System,Kerberos Service Ticket Operations,{0CCE9240-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Credential Validation,{0CCE923F-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,System,Kerberos Authentication Service,{0CCE9242-69AE-11D9-BED3-505054503030},Success,,1 +WIN2019,,Option:CrashOnAuditFail,,Disabled,,0 +WIN2019,,Option:FullPrivilegeAuditing,,Disabled,,0 +WIN2019,,Option:AuditBaseObjects,,Disabled,,0 +WIN2019,,Option:AuditBaseDirectories,,Disabled,,0 +WIN2019,,FileGlobalSacl,,,, +WIN2019,,RegistryGlobalSacl,,,, diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/data/test_cases/cases_whodata_policy_change.yaml 
b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/test_cases/cases_whodata_policy_change.yaml new file mode 100644 index 0000000000..a10eaeadeb --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/data/test_cases/cases_whodata_policy_change.yaml @@ -0,0 +1,17 @@ +- name: audit_policy_change_event_checker + description: Check that, when folders are monitored in whodata mode and the audit policies change, FIM switches to realtime mode + configuration_parameters: + AUDIT_INTERVAL: 2 + metadata: + check_event: false + disabling_file: policy_disable.csv + fim_mode: whodata + +- name: audit_policy_change_event_4719 + description: Check that, when event 4719 is received for an audit policy change, FIM switches to realtime mode + configuration_parameters: + AUDIT_INTERVAL: 5 + metadata: + check_event: true + disabling_file: policy_success_removed.csv + fim_mode: whodata diff --git a/tests/integration/test_fim/test_files/test_whodata_policy_change/test_whodata_policy_change.py b/tests/integration/test_fim/test_files/test_whodata_policy_change/test_whodata_policy_change.py new file mode 100644 index 0000000000..f4ac47a8f2 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_whodata_policy_change/test_whodata_policy_change.py @@ -0,0 +1,189 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: The File Integrity Monitoring (FIM) system watches selected files in whodata mode; if the policies for those + files change during runtime, the monitoring mode changes to realtime. These tests check that, when the + policies change, monitoring continues correctly in realtime and events are detected. + +components: + - FIM + +suite: whodata_policy_change + +targets: + - agent + +daemons: + - wazuh-syscheckd + +os_platform: + - windows + +os_version: + - Windows 10 + - Windows Server 2019 + - Windows Server 2016 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + +pytest_args: + - fim_mode: + whodata: Implies real-time monitoring but adding the 'who-data' information. + - tier: + 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. + 1: Only level 1 tests are performed, they check functionalities of medium complexity. + 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. 
 + +tags: + - fim_whodata_policy_change +''' +import os +import time + +import pytest +from wazuh_testing.tools import PREFIX, configuration +from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.local_actions import run_local_command_returning_output +from wazuh_testing import T_5, T_20, global_parameters, LOG_FILE_PATH +from wazuh_testing.modules import fim +from wazuh_testing.modules.fim import event_monitor as evm +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.utils import regular_file_cud + + +# Marks +pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_whodata_policy_change.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_whodata_policy_change.yaml') + +# Variables +test_folders = [os.path.join(PREFIX, fim.TEST_DIR_1)] +folder = test_folders[0] +file_list = [f"regular_file"] +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) +policies_file = os.path.join(TEST_DATA_PATH, 'policy_enable.csv') + +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = configuration.get_test_cases_data(test_cases_path) +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['TEST_DIRECTORIES'] = folder +configurations = configuration.load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) + + +# Tests +@pytest.mark.parametrize('policies_file', [policies_file], ids='') +@pytest.mark.parametrize('test_folders', [test_folders], ids='', scope='module') +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +def test_whodata_policy_change(configuration, metadata, set_wazuh_configuration, create_monitored_folders_module, + configure_local_internal_options_function, policies_file, restore_win_whodata_policies, + restart_syscheck_function, wait_syscheck_start): + ''' + description: Check that, when the 'wazuh-syscheckd' daemon is monitoring a folder in whodata mode on Windows and the Audit Policies are + changed, the monitoring switches to realtime and keeps working on the monitored files. + + test_phases: + - setup: + - Set wazuh configuration. + - Create target folder to be monitored + - Clean log files and restart wazuh to apply the configuration. + - test: + - Check that SACL has been configured for monitored folders + - Change windows audit whodata policies + - Check the change has been detected and monitoring changes to realtime mode + - Create, Update and Delete files in the monitored folder and check events are generated in realtime + - teardown: + - Restore windows audit policies + - Delete the monitored folders + - Restore configuration + - Stop wazuh + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - create_monitored_folders_module: + type: fixture + brief: Create the folders that will be monitored, delete them after test. + - set_wazuh_configuration: + type: fixture + brief: Set ossec.conf configuration. 
+ - configure_local_internal_options_function: + type: fixture + brief: Set local_internal_options configuration. + - policies_file: + type: string + brief: Path to the audit policies file used by the restore_win_whodata_policies fixture. + - restore_win_whodata_policies: + type: fixture + brief: Restore the Windows audit policies using a given CSV file after yield. + - restart_syscheck_function: + type: fixture + brief: Restart the syscheckd daemon and truncate the ossec.log file. + - wait_syscheck_start: + type: fixture + brief: Check that the starting FIM scan is detected. + + assertions: + - Verify the SACL for the monitored files is configured + - Verify Whodata monitoring has started + - Verify that event 4719 is detected and monitoring changes to real-time + - Verify the monitoring mode changes to real-time + - Verify monitoring in real-time works correctly for the monitored files. + + input_description: + - The file 'cases_whodata_policy_change.yaml' provides the test cases and specific configuration. + - The file 'configuration_whodata_policy_change.yaml' provides the configuration template to be used. + + expected_output: + - fr".*win_whodata.*The SACL of '({file})' will be configured" + - r'.*win_whodata.*(Event 4719).*Switching directories to realtime' + - fr".*set_whodata_mode_changes.*The '({file})' directory starts to be monitored in real-time mode." + - r'.*Sending FIM event: (.+)$' ('added', 'modified', and 'deleted' events) + + tags: + - whodata + ''' + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + # Check the folder is being monitored in whodata mode + evm.detect_windows_sacl_configured(wazuh_log_monitor) + # Check the Whodata engine has started + evm.detect_whodata_start(wazuh_log_monitor) + + # Change policies + if metadata['check_event']: + # Wait to allow thread_checker to be executed twice so Event 4719 detection starts. 
+ time.sleep(T_5) + command = f"auditpol /restore /file:{os.path.join(TEST_DATA_PATH,metadata['disabling_file'])}" + output = run_local_command_returning_output(command) + + # Check monitoring changes to realtime + if metadata['check_event']: + evm.check_fim_event(timeout=T_20, callback=evm.CB_RECIEVED_EVENT_4719) + evm.detect_windows_whodata_mode_change(wazuh_log_monitor) + + # Create/Update/Delete file and check events + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + regular_file_cud(folder, wazuh_log_monitor, file_list=file_list, event_mode=fim.REALTIME_MODE, + escaped=True, min_timeout=global_parameters.default_timeout*4, triggers_event=True) diff --git a/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/configuration_template/configuration_windows_system_folder_redirection.yaml b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/configuration_template/configuration_windows_system_folder_redirection.yaml new file mode 100644 index 0000000000..86d9a8db1a --- /dev/null +++ b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/configuration_template/configuration_windows_system_folder_redirection.yaml @@ -0,0 +1,32 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: INTERVAL + - directories: + value: TEST_DIRECTORIES + attributes: + - realtime: REALTIME + - whodata: WHODATA + - recursion_level: 0 + - windows_audit_interval: + value: 500 + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/test_cases/cases_windows_system_folder_redirection.yaml b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/test_cases/cases_windows_system_folder_redirection.yaml new file mode 100644 index 0000000000..b3499c0d13 --- /dev/null +++ b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/data/test_cases/cases_windows_system_folder_redirection.yaml @@ -0,0 +1,116 @@ +- name: monitor /Windows/System32 - scheduled + description: Monitor the System32 folder without redirection in Scheduled mode + configuration_parameters: + INTERVAL: 3 + REALTIME: 'no' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\System32\testdir1' + fim_mode: scheduled + metadata: + folder: system32 + fim_mode: scheduled + redirected: false + +- name: monitor /Windows/System32 - realtime + description: Monitor the System32 folder without redirection in Realtime mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'yes' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\System32\testdir1' + fim_mode: realtime + metadata: + folder: system32 + fim_mode: realtime + redirected: false + +- name: monitor /Windows/System32 - whodata + description: Monitor the System32 folder without redirection in Whodata mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'no' + WHODATA: 'yes' + TEST_DIRECTORIES: '%WINDIR%\System32\testdir1' + fim_mode: whodata + metadata: + folder: system32 + fim_mode: whodata + redirected: false + +- name: monitor /Windows/Sysnative - scheduled + description: Monitor the System32 through Sysnative redirection in Scheduled mode + configuration_parameters: + INTERVAL: 3 + REALTIME: 'no' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\Sysnative\testdir1' + 
fim_mode: scheduled + metadata: + folder: system32 + fim_mode: scheduled + redirected: true + +- name: monitor /Windows/Sysnative - realtime + description: Monitor the System32 through Sysnative redirection in Realtime mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'yes' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\Sysnative\testdir1' + fim_mode: realtime + metadata: + folder: system32 + fim_mode: realtime + redirected: true + +- name: monitor /Windows/Sysnative - whodata + description: Monitor the System32 through Sysnative redirection in Whodata mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'no' + WHODATA: 'yes' + TEST_DIRECTORIES: '%WINDIR%\Sysnative\testdir1' + fim_mode: whodata + metadata: + folder: system32 + fim_mode: whodata + redirected: true + +- name: monitor SyWOW64 - scheduled + description: Monitor the SysWOW64 without redirection in Scheduled mode + configuration_parameters: + INTERVAL: 3 + REALTIME: 'no' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\SysWOW64\testdir1' + fim_mode: scheduled + metadata: + folder: syswow64 + fim_mode: scheduled + redirected: false + +- name: monitor SysWOW64 - realtime + description: Monitor the SysWOW64 without redirection in Realtime mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'yes' + WHODATA: 'no' + TEST_DIRECTORIES: '%WINDIR%\SysWOW64\testdir1' + fim_mode: realtime + metadata: + folder: syswow64 + fim_mode: realtime + redirected: false + +- name: monitor SysWOW64 - whodata + description: Monitor the SysWOW64 without redirection in Whodata mode + configuration_parameters: + INTERVAL: 10000 + REALTIME: 'no' + WHODATA: 'yes' + TEST_DIRECTORIES: '%WINDIR%\SysWOW64\testdir1' + fim_mode: whodata + metadata: + folder: syswow64 + fim_mode: whodata + redirected: false diff --git a/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/test_windows_system_folder_redirection.py b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/test_windows_system_folder_redirection.py new file mode 100644 index 0000000000..a11639368d --- /dev/null +++ b/tests/integration/test_fim/test_files/test_windows_system_folder_redirection/test_windows_system_folder_redirection.py @@ -0,0 +1,166 @@ +''' +copyright: Copyright (C) 2015-2023, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts when these files are + added, modified or deleted. Specifically, these tests will check that FIM is able to monitor Windows system + folders. FIM can redirect %WINDIR%/Sysnative monitoring toward System32 folder, so the tests also check that + when monitoring Sysnative the path is converted to system32 and events are generated there properly. + +components: + - fim + +suite: windows_system_folder_redirection + +targets: + - agent + +daemons: + - wazuh-syscheckd + +os_platform: + - windows + +os_version: + - Windows 10 + - Windows Server 2019 + - Windows Server 2016 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + +pytest_args: + - fim_mode: + scheduled: File monitoring is done after every configured interval elapses. + realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. + whodata: Implies real-time monitoring but adding the 'who-data' information. 
+ - tier: + 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. + 1: Only level 1 tests are performed, they check functionalities of medium complexity. + 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. + +tags: + - windows_folder_redirection +''' +import os + + +import pytest +from wazuh_testing import LOG_FILE_PATH, T_10, T_60 +from wazuh_testing.tools import PREFIX, configuration +from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules.fim import TEST_DIR_1 +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import check_fim_event, CB_FIM_PATH_CONVERTED +from wazuh_testing.modules.fim.utils import regular_file_cud + + +# Marks +pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_windows_system_folder_redirection.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_windows_system_folder_redirection.yaml') + +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = configuration.get_test_cases_data(test_cases_path) +configurations = configuration.load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) + +# Variables +test_folders = [os.path.join(PREFIX, 'windows', 'System32', TEST_DIR_1), + os.path.join(PREFIX, 'windows', 'SysWOW64', TEST_DIR_1)] +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + +# Tests +@pytest.mark.parametrize('test_folders', [test_folders], ids='', scope='module') +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +def test_windows_system_monitoring(configuration, metadata, test_folders, set_wazuh_configuration, + create_monitored_folders_module, configure_local_internal_options_function, + restart_syscheck_function, wait_syscheck_start): + ''' + description: Check if the 'wazuh-syscheckd' monitors the windows system folders (System32 and SysWOW64) properly, + and that monitoring for Sysnative folder is redirected to System32 and works properly. + + test_phases: + - setup: + - Set wazuh configuration and local_internal_options. + - Create custom folder for monitoring + - Clean logs files and restart wazuh to apply the configuration. + - test: + - In case of monitoring Sysnative, check it is redirected to System32. + - Create, Update and Delete files in monitored folders, and check logs appear. + - teardown: + - Delete custom monitored folder + - Restore configuration + - Stop wazuh + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - test_folders: + type: dict + brief: List of folders to be created for monitoring. + - set_wazuh_configuration: + type: fixture + brief: Set ossec.conf configuration. + - create_monitored_folders_module: + type: fixture + brief: Create a given list of folders when the module starts. Delete the folders at the end of the module. 
+ - configure_local_internal_options_function: + type: fixture + brief: Set local_internal_options.conf file. + - restart_syscheck_function: + type: fixture + brief: Restart the syscheckd daemon and truncate the ossec.log file. + - wait_syscheck_start: + type: fixture + brief: Check that the starting FIM scan is detected. + + assertions: + - Verify that, when monitoring the Sysnative folder, the path is converted to System32 + and the conversion is logged. + - Verify that FIM 'added', 'modified', and 'deleted' events are generated for the files + created, modified, and deleted in the monitored system folders. + + input_description: The file 'configuration_windows_system_folder_redirection.yaml' provides the configuration + template. + The file 'cases_windows_system_folder_redirection.yaml' provides the test cases configuration + details for each test case. + + expected_output: + - r'.*fim_adjust_path.*Convert '(.*) to '(.*)' to process the FIM events.' + - r'.*Sending FIM event: (.+)$' ('added', 'modified', and 'deleted' events)' + ''' + file_list = [f"regular_file"] + folder = os.path.join(PREFIX, 'windows', metadata['folder'], TEST_DIR_1) + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + # If monitoring Sysnative, check the redirection log message + if metadata['redirected']: + check_fim_event(callback=CB_FIM_PATH_CONVERTED, timeout=T_10) + + # Create, Update and Delete files in monitored folder and check expected events are generated + regular_file_cud(folder, wazuh_log_monitor, file_list=file_list, min_timeout=T_60, triggers_event=True, + escaped=True) diff --git a/tests/integration/test_fim/test_registry/conftest.py b/tests/integration/test_fim/test_registry/conftest.py index 86eacd881f..72934c965e 100644 --- a/tests/integration/test_fim/test_registry/conftest.py +++ b/tests/integration/test_fim/test_registry/conftest.py @@ -4,10 +4,11 @@ import pytest -from wazuh_testing.fim import LOG_FILE_PATH, detect_initial_scan, detect_realtime_start, detect_whodata_start +from wazuh_testing import LOG_FILE_PATH from wazuh_testing.tools.file import truncate_file from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.services import control_service +from wazuh_testing.modules.fim.event_monitor import detect_initial_scan, detect_realtime_start, detect_whodata_start @pytest.fixture(scope='module') diff --git a/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_delete_registry.py b/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_delete_registry.py index e27ac02123..2af11d8747 100644 --- a/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_delete_registry.py +++ b/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_delete_registry.py @@ -176,8 +176,8 @@ def test_delete_registry(key, subkey, arch, value_list, check_time_travel(scheduled, monitor=wazuh_log_monitor) events = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_value_event, - accum_results=len(value_list), error_message='Did not receive expected ' - '"Sending FIM event: ..." event').result() + accum_results=len(value_list), + error_message='Did not receive expected "Sending FIM event: ..." 
event').result() for ev in events: validate_registry_value_event(ev, mode=mode) diff --git a/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_registry_duplicated_entries.py b/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_registry_duplicated_entries.py index 2b80916843..dcea9e48e6 100644 --- a/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_registry_duplicated_entries.py +++ b/tests/integration/test_fim/test_registry/test_registry_basic_usage/test_basic_usage_registry_duplicated_entries.py @@ -6,8 +6,8 @@ from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.utils import get_version -# Helper functions +# Helper functions def extra_configuration_after_yield(): fim.delete_registry(fim.registry_parser[key], sub_key_2, fim.KEY_WOW64_64KEY) @@ -23,11 +23,9 @@ def check_event_type_and_path(fim_event, monitored_registry): # Marks - pytestmark = [pytest.mark.win32, pytest.mark.tier(level=0)] # Variables - key = 'HKEY_LOCAL_MACHINE' classes_subkey = os.path.join('SOFTWARE', 'Classes') diff --git a/tests/integration/test_fim/test_registry/test_registry_file_limit/data/wazuh_conf.yaml b/tests/integration/test_fim/test_registry/test_registry_file_limit/data/wazuh_conf.yaml deleted file mode 100644 index 2347b3f284..0000000000 --- a/tests/integration/test_fim/test_registry/test_registry_file_limit/data/wazuh_conf.yaml +++ /dev/null @@ -1,43 +0,0 @@ ---- -#conf 1 -- tags: - - file_limit_registry_conf - apply_to_modules: - - test_registry_limit_capacity_alerts - - test_registry_limit_full - - test_registry_limit_values - sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - frequency: - value: 5 - - windows_registry: - value: WINDOWS_REGISTRY - attributes: - - arch: '64bit' - - file_limit: - elements: - - enabled: - value: 'yes' - - entries: - value: FILE_LIMIT - - section: sca - elements: - - enabled: - value: 'no' - - section: rootcheck - elements: - - disabled: - value: 'yes' - - section: active-response - elements: - - disabled: - value: 'yes' - - section: wodle - attributes: - - name: 'syscollector' - elements: - - disabled: - value: 'yes' diff --git a/tests/integration/test_fim/test_registry/test_registry_limit/data/wazuh_conf.yaml b/tests/integration/test_fim/test_registry/test_registry_limit/data/wazuh_conf.yaml new file mode 100644 index 0000000000..a2dcec0394 --- /dev/null +++ b/tests/integration/test_fim/test_registry/test_registry_limit/data/wazuh_conf.yaml @@ -0,0 +1,136 @@ +# conf 1 +- tags: + - fim_registry_limit + apply_to_modules: + - test_registry_limit_capacity_alerts + - test_registry_value_limit_full + - test_registry_limit_values + sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 5 + - windows_registry: + value: WINDOWS_REGISTRY + attributes: + - arch: 64bit + - registry_limit: + elements: + - enabled: + value: 'yes' + - entries: + value: REGISTRIES + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' + +# conf 2 +- tags: + - fim_registry_limit + apply_to_modules: + - test_registry_key_limit_full + sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 5 + - windows_registry: + value: 
WINDOWS_REGISTRY_1 + attributes: + - arch: 64bit + - windows_registry: + value: WINDOWS_REGISTRY_2 + attributes: + - arch: 64bit + - registry_limit: + elements: + - enabled: + value: 'yes' + - entries: + value: REGISTRIES + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' + +# conf 2 +- tags: + - fim_registry_limit + apply_to_modules: + - test_registry_key_limit_values + sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 5 + - windows_registry: + value: WINDOWS_REGISTRY_1 + attributes: + - arch: 64bit + - windows_registry: + value: WINDOWS_REGISTRY_2 + attributes: + - arch: 64bit + - windows_registry: + value: WINDOWS_REGISTRY_3 + attributes: + - arch: 64bit + - registry_limit: + elements: + - enabled: + value: 'yes' + - entries: + value: REGISTRIES + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: active-response + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_full.py b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_full.py new file mode 100644 index 0000000000..eba5425348 --- /dev/null +++ b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_full.py @@ -0,0 +1,179 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts + when these files are modified. Specifically, these tests will check if FIM events are + generated while the database is in 'full database alert' mode for reaching the limit + of entries to monitor set in the 'registry_limit'-'registries' tag. + The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks + configured files for changes to the checksums, permissions, and ownership. + +tier: 1 + +modules: + - fim + +components: + - agent + +daemons: + - wazuh-syscheckd + +os_platform: + - windows + +os_version: + - Windows 10 + - Windows 8 + - Windows 7 + - Windows Server 2019 + - Windows Server 2016 + - Windows Server 2012 + - Windows Server 2003 + - Windows XP + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html#file-limit + +pytest_args: + - fim_mode: + scheduled: file/registry changes are monitored only at the configured interval + - tier: + 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. + 1: Only level 1 tests are performed, they check functionalities of medium complexity. + 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. 
+ +tags: + - fim_registry_limit +''' +import os +import pytest + +from wazuh_testing import global_parameters +from wazuh_testing.tools import LOG_FILE_PATH +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules import WINDOWS, TIER1 +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, registry_parser, + KEY_WOW64_64KEY, KEY_ALL_ACCESS, RegOpenKeyEx, RegCloseKey) +from wazuh_testing.modules.fim.event_monitor import (CB_REGISTRY_LIMIT_CAPACITY, ERR_MSG_DATABASE_FULL_ALERT, + ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT, CB_COUNT_REGISTRY_ENTRIES, + CB_DATABASE_FULL_COULD_NOT_INSERT_KEY, + ERR_MSG_FIM_REGISTRY_ENTRIES, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, + ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL) +from wazuh_testing.modules.fim.utils import generate_params, create_registry + +# Marks +pytestmark = [WINDOWS, TIER1] + + +# Variables +test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), + os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2)] +test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) +NUM_REGS = 2 +EXPECTED_DB_STATE = "100" +monitor_timeout = 40 + + +# Configurations +registry_limit_list = ['2'] +conf_params = {'WINDOWS_REGISTRY_1': test_regs[0], 'WINDOWS_REGISTRY_2': test_regs[1]} +params, metadata = generate_params(extra_params=conf_params, + apply_to_all=({'REGISTRIES': registry_limit_elem} for registry_limit_elem in + registry_limit_list), modes=['scheduled']) +configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') +configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) + + +# Fixtures +@pytest.fixture(scope='module', params=configurations) +def get_configuration(request): + """Get configurations from the module.""" + return request.param + + +# Functions +def extra_configuration_before_yield(): + """Generate registry entries to fill database""" + for key in (MONITORED_KEY, MONITORED_KEY_2): + reg_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], key, KEY_WOW64_64KEY) + reg_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], key, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) + RegCloseKey(reg_handle) + + +# Tests +def test_registry_key_limit_full(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): + ''' + description: Check if the 'wazuh-syscheckd' daemon generates proper events while the FIM database is in + 'full database alert' mode for reaching the limit of entries to monitor set in the 'registries' option + of the 'registry_limit' tag. + For this purpose, the test will set the a limit of keys to monitor, and will monitor a series of keys. + Then, it will try to add a new key and it will check if the FIM event 'full' is generated. Finally, the + test will verify that, in the FIM 'entries' event, the number of entries and monitored values match. + + wazuh_min_version: 4.5.0 + + parameters: + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. + - get_configuration: + type: fixture + brief: Get configurations from the module. 
+ - configure_environment: + type: fixture + brief: Configure a custom environment for testing. + - restart_syscheckd: + type: fixture + brief: Clear the Wazuh logs file and start a new monitor. + + assertions: + - Verify that the FIM database is in 'full database alert' mode + when the maximum number of values to monitor has been reached. + - Verify that proper FIM events are generated while the database + is in 'full database alert' mode. + + input_description: A test case (fim_registry_limit) is contained in external YAML file (wazuh_conf.yaml) + which includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined + with the testing registry key to be monitored defined in this module. + + expected_output: + - r'.*Registry database is (\\d+)% full.' + - r'.*Couldn't insert ('.*') entry into DB. The DB is full.*' + - r'.*Fim registry entries count:.*' + + tags: + - scheduled + ''' + database_state = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_REGISTRY_LIMIT_CAPACITY), + error_message=ERR_MSG_DATABASE_FULL_ALERT).result() + + assert database_state == EXPECTED_DB_STATE, ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL + + reg_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY+'\\DB_FULL', + KEY_WOW64_64KEY) + reg_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY+'\\DB_FULL', 0, + KEY_ALL_ACCESS | KEY_WOW64_64KEY) + + RegCloseKey(reg_handle) + + wazuh_log_monitor.start(timeout=monitor_timeout, error_message=ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT, + callback=generate_monitoring_callback(CB_DATABASE_FULL_COULD_NOT_INSERT_KEY)) + + key_entries = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_COUNT_REGISTRY_ENTRIES), + error_message=ERR_MSG_FIM_REGISTRY_ENTRIES).result() + + assert key_entries == str(get_configuration['metadata']['registries']), ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_values.py b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_values.py new file mode 100644 index 0000000000..23ebd3c025 --- /dev/null +++ b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_key_limit_values.py @@ -0,0 +1,179 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts + when these files are modified. Specifically, these tests will check if the FIM event + 'maximum number of entries' has the correct value for the monitored entries limit of + the 'registries' option. + The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks configured + files for changes to the checksums, permissions, and ownership. 
+ +tier: 1 + +modules: + - fim + +components: + - agent + +daemons: + - wazuh-syscheckd + +os_platform: + - windows + +os_version: + - Windows 10 + - Windows 8 + - Windows 7 + - Windows Server 2019 + - Windows Server 2016 + - Windows Server 2012 + - Windows Server 2003 + - Windows XP + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html#file-limit + +pytest_args: + - fim_mode: + scheduled: implies a scheduled scan + - tier: + 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. + 1: Only level 1 tests are performed, they check functionalities of medium complexity. + 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. + +tags: + - fim_registry_limit +''' +import os +import pytest +from wazuh_testing import LOG_FILE_PATH, global_parameters +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules import WINDOWS, TIER1 +from wazuh_testing.modules.fim import (registry_parser, KEY_WOW64_64KEY, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, + MONITORED_KEY_2, MONITORED_KEY_3, REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, + RegCloseKey) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (CB_REGISTRY_LIMIT_VALUE, ERR_MSG_REGISTRY_LIMIT_VALUES, + CB_COUNT_REGISTRY_ENTRIES, ERR_MSG_FIM_REGISTRY_ENTRIES, + ERR_MSG_WRONG_NUMBER_OF_ENTRIES, ERR_MSG_FIM_REGISTRY_ENTRIES, + ERR_MSG_WRONG_REGISTRY_LIMIT_VALUE, CB_COUNT_REGISTRY_ENTRIES) +from wazuh_testing.modules.fim.utils import generate_params, create_registry, modify_registry_value + + +# Marks +pytestmark = [WINDOWS, TIER1] + + +# Variables +test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), + os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2), + os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_3)] +test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) +monitor_timeout = 40 + + +# Configurations +registry_limit_list = ['1', '2', '3'] +conf_params = {'WINDOWS_REGISTRY_1': test_regs[0], + 'WINDOWS_REGISTRY_2': test_regs[1], + 'WINDOWS_REGISTRY_3': test_regs[2]} +params, metadata = generate_params(extra_params=conf_params, + apply_to_all=({'REGISTRIES': registry_limit_elem} for registry_limit_elem + in registry_limit_list), modes=['scheduled']) + +configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') +configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) + + +# Fixtures +@pytest.fixture(scope='module', params=configurations) +def get_configuration(request): + """Get configurations from the module.""" + return request.param + + +# Functions +def extra_configuration_before_yield(): + """Generate registry entries to fill database""" + for key in (MONITORED_KEY, MONITORED_KEY_2, MONITORED_KEY_3): + reg_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], key, KEY_WOW64_64KEY) + reg_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], key, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) + RegCloseKey(reg_handle) + + +# Tests +def test_registry_limit_values(configure_local_internal_options_module, get_configuration, 
configure_environment, + restart_syscheckd): + ''' + description: Check if the 'wazuh-syscheckd' daemon detects the value of the 'registries' tag, which corresponds to + the maximum number of entries to monitor from the 'db_entries_limit' option of FIM. For this purpose, + the test will monitor three keys, and the limit for registries to monitor will change. Then it will + check that the generated FIM event 'maximum number of entries' has the correct value for registries. + Finally, the test will verify that, in the FIM 'entries' event, the number of entries and monitored + values match. + + wazuh_min_version: 4.4.0 + + parameters: + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. + - get_configuration: + type: fixture + brief: Get configurations from the module. + - configure_environment: + type: fixture + brief: Configure a custom environment for testing. + - restart_syscheckd: + type: fixture + brief: Clear the Wazuh logs file and start a new monitor. + + assertions: + - Verify that the FIM event 'maximum number of entries' has the correct value + for the monitored entries limit of the 'registries' option. + + input_description: A test case (fim_registry_limit) is contained in external YAML file (wazuh_conf.yaml) + which includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined + with the limits and the testing registry key to be monitored defined in this module. + + expected_output: + - r".*Maximum number of registry values to be monitored: '(\\d+)'" + - r".*Fim registry entries count: '(\\d+)'" + + tags: + - scheduled + ''' + registry_limit = get_configuration['metadata']['registries'] + + for key in (MONITORED_KEY, MONITORED_KEY_2, MONITORED_KEY_3): + reg_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], key, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) + # Add values to the registry, plus one value over the registry limit + for i in range(0, int(registry_limit) + 1): + modify_registry_value(reg_handle, f'value_{i}', REG_SZ, 'added') + + # Check that the configured registry limit value appears in the log + registry_limit_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_REGISTRY_LIMIT_VALUE), + error_message=ERR_MSG_REGISTRY_LIMIT_VALUES).result() + + # Compare that the value configured is correct + assert registry_limit_value == registry_limit, ERR_MSG_WRONG_REGISTRY_LIMIT_VALUE + + # Get the amount of entries monitored and assert it is the same as the limit and not over it + key_entries = wazuh_log_monitor.start(timeout=monitor_timeout, + callback=generate_monitoring_callback(CB_COUNT_REGISTRY_ENTRIES), + error_message=ERR_MSG_FIM_REGISTRY_ENTRIES).result() + + assert key_entries == registry_limit, ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_capacity_alerts.py b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_capacity_alerts.py similarity index 66% rename from tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_capacity_alerts.py rename to tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_capacity_alerts.py index 718ddbc389..7afb314608 100644 --- a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_capacity_alerts.py +++ b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_capacity_alerts.py @@
-8,9 +8,10 @@ type: integration brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts - when these files are modified. Specifically, these tests will check if the threshold - set in the 'file_limit' tag generates FIM events when the number of monitored entries - approaches this value. + when these files are modified. Specifically, these tests will check if FIM events are + generated while the database is close to reaching the limit of entries to monitor set + in the 'registry_limit'-'entries' tag. + The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks configured files for changes to the checksums, permissions, and ownership. @@ -44,58 +45,58 @@ pytest_args: - fim_mode: - realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. - whodata: Implies real-time monitoring but adding the 'who-data' information. + scheduled: implies a scheduled scan - tier: 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. 1: Only level 1 tests are performed, they check functionalities of medium complexity. 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. tags: - - fim_registry_file_limit + - fim_registry_limit ''' import os from sys import platform import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params, modify_registry_value, wait_for_scheduled_scan, \ - delete_registry_value, registry_parser, KEY_WOW64_64KEY, callback_detect_end_scan, REG_SZ, KEY_ALL_ACCESS, \ - RegOpenKeyEx, RegCloseKey -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, CB_FILE_LIMIT_CAPACITY, - ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT, ERR_MSG_FIM_INODE_ENTRIES, CB_FILE_LIMIT_BACK_TO_NORMAL, - ERR_MSG_DB_BACK_TO_NORMAL, CB_COUNT_REGISTRY_FIM_ENTRIES, ERR_MSG_WRONG_NUMBER_OF_ENTRIES) +from wazuh_testing import global_parameters, LOG_FILE_PATH +from wazuh_testing.modules.fim import (registry_parser, KEY_WOW64_64KEY, REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, + RegCloseKey, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (callback_detect_end_scan, CB_REGISTRY_LIMIT_CAPACITY, + ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT, ERR_MSG_DB_BACK_TO_NORMAL, + ERR_MSG_FIM_REGISTRY_ENTRIES, CB_REGISTRY_DB_BACK_TO_NORMAL, + CB_COUNT_REGISTRY_VALUE_ENTRIES, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, + ERR_MSG_SCHEDULED_SCAN_ENDED) +from wazuh_testing.modules import WINDOWS, TIER1 from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules.fim.utils import (generate_params, modify_registry_value, wait_for_scheduled_scan, + delete_registry_value) if platform == 'win32': import pywintypes + # Marks +pytestmark = [WINDOWS, TIER1] -pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # Variables - test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY)] test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) scan_delay = 5 -# Configurations - -file_limit_list = ['100'] +# Configurations +registry_limit_list = ['100'] conf_params = {'WINDOWS_REGISTRY': test_regs[0]} params, metadata = generate_params(extra_params=conf_params, - 
apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list), - modes=['scheduled']) - + apply_to_all=({'REGISTRIES': registry_limit_elem} for registry_limit_elem + in registry_limit_list), modes=['scheduled']) configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) # Fixtures - - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" @@ -103,11 +104,9 @@ def get_configuration(request): # Tests - - @pytest.mark.parametrize('percentage', [(80), (90), (0)]) -def test_file_limit_capacity_alert(percentage, get_configuration, configure_environment, restart_syscheckd, - wait_for_fim_start): +def test_registry_limit_capacity_alert(percentage, get_configuration, configure_local_internal_options_module, + configure_environment, restart_syscheckd, wait_for_fim_start): ''' description: Check if the 'wazuh-syscheckd' daemon generates events for different capacity thresholds limits when using the 'schedule' monitoring mode. For this purpose, the test will monitor a key in which @@ -116,7 +115,7 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi the total and when the number is less than that percentage. Finally, the test will verify that, in the FIM 'entries' event, the entries number is one unit more than the number of monitored values. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 @@ -124,12 +123,12 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi - percentage: type: int brief: Percentage of testing values to be created. - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. - get_configuration: type: fixture brief: Get configurations from the module. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - configure_environment: type: fixture brief: Configure a custom environment for testing. @@ -145,28 +144,28 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi exceeds the established threshold and viceversa. - Verify that FIM 'entries' events contain one unit more than the number of monitored values. - input_description: A test case (file_limit_registry_conf) is contained in external YAML file (wazuh_conf.yaml) + input_description: A test case (fim_registry_limit) is contained in external YAML file (wazuh_conf.yaml) which includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined with the percentages and the testing registry key to be monitored defined in this module. expected_output: - - r'.*Sending FIM event: (.+)$' ('added' events) - - r'.*Sending DB .* full alert.' - - r'.*Sending DB back to normal alert.' - - r'.*Fim registry entries' + - r".*Registry database is (\\d+)% full." + - r".*(The registry database status returns to normal)." 
+ - r".*Fim registry value entries count: '(\\d+)'" tags: - scheduled ''' - limit = int(get_configuration['metadata']['file_limit']) + limit = int(get_configuration['metadata']['registries']) NUM_REGS = int(limit * (percentage / 100)) + 1 if percentage == 0: NUM_REGS = 0 - reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) - + reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | + KEY_WOW64_64KEY) + # Add registry values to fill the database up to alert generating percentage if percentage >= 80: # Percentages 80 and 90 for i in range(NUM_REGS): @@ -179,7 +178,7 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_end_scan, - error_message=ERR_MSG_FIM_INODE_ENTRIES) + error_message=ERR_MSG_SCHEDULED_SCAN_ENDED) for i in range(limit): try: @@ -195,17 +194,17 @@ def test_file_limit_capacity_alert(percentage, get_configuration, configure_envi if percentage >= 80: # Percentages 80 and 90 wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_FILE_LIMIT_CAPACITY), + callback=generate_monitoring_callback(CB_REGISTRY_LIMIT_CAPACITY), error_message=ERR_MSG_DATABASE_PERCENTAGE_FULL_ALERT).result() else: # Database back to normal wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_FILE_LIMIT_BACK_TO_NORMAL), + callback=generate_monitoring_callback(CB_REGISTRY_DB_BACK_TO_NORMAL), error_message=ERR_MSG_DB_BACK_TO_NORMAL).result() - entries = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_COUNT_REGISTRY_FIM_ENTRIES), - error_message=ERR_MSG_FIM_INODE_ENTRIES).result() + value_entries = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_COUNT_REGISTRY_VALUE_ENTRIES), + error_message=ERR_MSG_FIM_REGISTRY_ENTRIES).result() - # We add 1 because of the key created to hold the values - assert entries == str(NUM_REGS + 1), ERR_MSG_WRONG_NUMBER_OF_ENTRIES + # Assert the number of value_entries matches the ammount that was generated. + assert value_entries == str(NUM_REGS), ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_values.py b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_values.py similarity index 50% rename from tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_values.py rename to tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_values.py index 211cc0326e..bd27a86b32 100644 --- a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_values.py +++ b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_limit_values.py @@ -8,11 +8,12 @@ type: integration brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts - when these files are modified. Specifically, these tests will check if the FIM event - 'maximum number of entries' has the correct value for the monitored entries limit of - the 'file_limit' option. - The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks configured - files for changes to the checksums, permissions, and ownership. 
+ when these files are modified. Specifically, these tests will check that after having a + limit configured for the 'entries' option for 'registry_limit' of syscheck, it will + only monitor values up to the specified limit and any excess will not be monitored. + + The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks + configured files for changes to the checksums, permissions, and ownership. components: - fim @@ -44,54 +45,55 @@ pytest_args: - fim_mode: - realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. - whodata: Implies real-time monitoring but adding the 'who-data' information. + scheduled: implies a scheduled scan - tier: 0: Only level 0 tests are performed, they check basic functionalities and are quick to perform. 1: Only level 1 tests are performed, they check functionalities of medium complexity. 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. tags: - - fim_registry_file_limit + - fim_registry_limit ''' -import os, sys + +import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, generate_params, modify_registry_value, registry_parser, KEY_WOW64_64KEY, - REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, RegCloseKey, create_registry) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, CB_FILE_LIMIT_VALUE, - ERR_MSG_FILE_LIMIT_VALUES, CB_COUNT_REGISTRY_FIM_ENTRIES, - ERR_MSG_FIM_INODE_ENTRIES, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, - ERR_MSG_WRONG_FILE_LIMIT_VALUE) + +from wazuh_testing import global_parameters, LOG_FILE_PATH from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules import WINDOWS, TIER1 +from wazuh_testing.fim import (generate_params, modify_registry_value, create_registry) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, registry_parser, KEY_WOW64_64KEY, + REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, RegCloseKey) +from wazuh_testing.modules.fim.event_monitor import (CB_REGISTRY_LIMIT_VALUE, ERR_MSG_FIM_REGISTRY_VALUE_ENTRIES, + ERR_MSG_REGISTRY_LIMIT_VALUES, CB_COUNT_REGISTRY_VALUE_ENTRIES, + ERR_MSG_WRONG_NUMBER_OF_ENTRIES, ERR_MSG_WRONG_FILE_LIMIT_VALUE) + # Marks +pytestmark = [WINDOWS, TIER1] -pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # Variables - test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY)] test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) monitor_timeout = 40 -# Configurations -file_limit_list = ['1', '1000'] +# Configurations +registry_limit_list = [10] conf_params = {'WINDOWS_REGISTRY': test_regs[0]} params, metadata = generate_params(extra_params=conf_params, - apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list), - modes=['scheduled']) - + apply_to_all=({'REGISTRIES': registry_elem} for registry_elem + in registry_limit_list), modes=['scheduled']) configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) # Fixtures - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the 
module.""" @@ -99,32 +101,35 @@ def get_configuration(request): # Functions - def extra_configuration_before_yield(): """Generate registry entries to fill database""" - reg1_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, KEY_WOW64_64KEY) - - RegCloseKey(reg1_handle) + reg_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, KEY_WOW64_64KEY) + reg_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | + KEY_WOW64_64KEY) + # Add values to registry plus 1 values over the registry limit + for i in range(0, registry_limit_list[0] + 1): + modify_registry_value(reg_handle, f'value_{i}', REG_SZ, 'added') + RegCloseKey(reg_handle) # Tests -@pytest.mark.skip(reason="Blocked by issue wazuh/wazuh #11819") -def test_file_limit_values(get_configuration, configure_environment, restart_syscheckd): +def test_registry_limit_values(configure_local_internal_options_module, get_configuration, configure_environment, + restart_syscheckd): ''' - description: Check if the 'wazuh-syscheckd' daemon detects the value of the 'entries' tag, which corresponds to - the maximum number of entries to monitor from the 'file_limit' option of FIM. For this purpose, + description: Check if the 'wazuh-syscheckd' daemon detects the value of the 'registries' tag, which corresponds to + the maximum number of entries to monitor from the 'registry_limit' option of FIM. For this purpose, the test will monitor a key in which multiple testing values will be added. Then, it will check if the FIM event 'maximum number of entries' is generated and has the correct value. Finally, the test - will verify that, in the FIM 'entries' event, the number of entries and monitored values match. + will verify that, in the FIM 'values entries' event, the number of entries and monitored values match. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -137,35 +142,31 @@ def test_file_limit_values(get_configuration, configure_environment, restart_sys assertions: - Verify that the FIM event 'maximum number of entries' has the correct value - for the monitored entries limit of the 'file_limit' option. + for the monitored entries limit of the 'registries' option. - input_description: A test case (file_limit_registry_conf) is contained in external YAML file (wazuh_conf.yaml) + input_description: A test case (fim_registry_limit) is contained in external YAML file (wazuh_conf.yaml) which includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined with the limits and the testing registry key to be monitored defined in this module. 
expected_output: - - r'.*Maximum number of entries to be monitored' - - r'.*Fim registry entries' + - r".*Maximum number of registry values to be monitored: '(\\d+)'" + - r".*Fim registry values entries count: '(\\d+)'" tags: - scheduled ''' - file_limit = get_configuration['metadata']['file_limit'] - reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) - # Add values to registry plus 10 values over the file limit - for i in range(0, int(file_limit) + 10): - modify_registry_value(reg1_handle, f'value_{i}', REG_SZ, 'added') - + registry_limit = get_configuration['metadata']['registries'] + # Look for the file limit value has been configured - file_limit_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_FILE_LIMIT_VALUE), - error_message=ERR_MSG_FILE_LIMIT_VALUES).result() + registry_limit_value = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_REGISTRY_LIMIT_VALUE), + error_message=ERR_MSG_REGISTRY_LIMIT_VALUES).result() # Compare that the value configured is correct - assert file_limit_value == get_configuration['metadata']['file_limit'], ERR_MSG_WRONG_FILE_LIMIT_VALUE + assert registry_limit_value == str(registry_limit), ERR_MSG_WRONG_FILE_LIMIT_VALUE # Get the ammount of entries monitored and assert they are the same as the limit and not over - entries = wazuh_log_monitor.start(timeout=monitor_timeout, - callback=generate_monitoring_callback(CB_COUNT_REGISTRY_FIM_ENTRIES), - error_message=ERR_MSG_FIM_INODE_ENTRIES).result() + value_entries = wazuh_log_monitor.start(timeout=monitor_timeout, + callback=generate_monitoring_callback(CB_COUNT_REGISTRY_VALUE_ENTRIES), + error_message=ERR_MSG_FIM_REGISTRY_VALUE_ENTRIES).result() - assert entries == str(get_configuration['metadata']['file_limit']), ERR_MSG_WRONG_NUMBER_OF_ENTRIES + assert value_entries == str(registry_limit), ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_full.py b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_value_limit_full.py similarity index 62% rename from tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_full.py rename to tests/integration/test_fim/test_registry/test_registry_limit/test_registry_value_limit_full.py index c06a425d0e..e3620d9f0a 100644 --- a/tests/integration/test_fim/test_registry/test_registry_file_limit/test_registry_limit_full.py +++ b/tests/integration/test_fim/test_registry/test_registry_limit/test_registry_value_limit_full.py @@ -10,7 +10,7 @@ brief: File Integrity Monitoring (FIM) system watches selected files and triggering alerts when these files are modified. Specifically, these tests will check if FIM events are generated while the database is in 'full database alert' mode for reaching the limit - of entries to monitor set in the 'file_limit' tag. + of entries to monitor set in the 'registry_limit'-'entries' tag. The FIM capability is managed by the 'wazuh-syscheckd' daemon, which checks configured files for changes to the checksums, permissions, and ownership. @@ -51,23 +51,28 @@ 2: Only level 2 tests are performed, they check advanced functionalities and are slow to perform. 
tags: - - fim_registry_file_limit + - fim_registry_limit ''' import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params, modify_registry_value, registry_parser, KEY_WOW64_64KEY, \ - REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, RegCloseKey, create_registry -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, CB_FILE_LIMIT_CAPACITY, - ERR_MSG_DATABASE_FULL_ALERT_EVENT, ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT, CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE, - CB_COUNT_REGISTRY_FIM_ENTRIES, ERR_MSG_FIM_INODE_ENTRIES, ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL, - ERR_MSG_WRONG_NUMBER_OF_ENTRIES) +from wazuh_testing import LOG_FILE_PATH, global_parameters from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules import WINDOWS, TIER1 +from wazuh_testing.modules.fim import (registry_parser, KEY_WOW64_64KEY, REG_SZ, KEY_ALL_ACCESS, RegOpenKeyEx, + RegCloseKey, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY) +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options +from wazuh_testing.modules.fim.event_monitor import (CB_REGISTRY_LIMIT_CAPACITY, CB_COUNT_REGISTRY_VALUE_ENTRIES, + CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE, + ERR_MSG_DATABASE_FULL_ALERT, ERR_MSG_WRONG_NUMBER_OF_ENTRIES, + ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT, + ERR_MSG_FIM_REGISTRY_VALUE_ENTRIES, + ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL) +from wazuh_testing.modules.fim.utils import generate_params, modify_registry_value, create_registry # Marks +pytestmark = [WINDOWS, TIER1] -pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # Variables test_reg = os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY) @@ -77,20 +82,18 @@ EXPECTED_DATABES_STATE = "100" monitor_timeout = 40 -# Configurations -file_limit_list = ['10'] +# Configurations +registry_limit_list = ['10'] conf_params = {'WINDOWS_REGISTRY': test_reg} params, metadata = generate_params(extra_params=conf_params, - apply_to_all=({'FILE_LIMIT': file_limit_elem} for file_limit_elem in file_limit_list), - modes=['scheduled']) - + apply_to_all=({'REGISTRIES': registry_limit_elem} for registry_limit_elem + in registry_limit_list), modes=['scheduled']) configurations_path = os.path.join(test_data_path, 'wazuh_conf.yaml') configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) # Fixtures - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" @@ -98,11 +101,11 @@ def get_configuration(request): # Functions - def extra_configuration_before_yield(): """Generate registry entries to fill database""" reg1_handle = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, KEY_WOW64_64KEY) - reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) + reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | + KEY_WOW64_64KEY) for i in range(0, NUM_REGS): modify_registry_value(reg1_handle, f'value_{i}', REG_SZ, 'added') @@ -111,23 +114,25 @@ def extra_configuration_before_yield(): # Tests -def test_file_limit_full(get_configuration, configure_environment, restart_syscheckd): +def test_registry_value_limit_full(configure_local_internal_options_module, get_configuration, 
configure_environment, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon generates proper events while the FIM database is in - 'full database alert' mode for reaching the limit of entries to monitor set in the 'file_limit' tag. + 'full database alert' mode for reaching the limit of entries to monitor set in the 'entries' option + of the 'registry_limit' tag. For this purpose, the test will monitor a key in which several testing values will be created until the entry monitoring limit is reached. Then, it will check if the FIM event 'full' is generated when a new testing value is added to the monitored key. Finally, the test will verify that, in the FIM 'entries' event, the number of entries and monitored values match. - wazuh_min_version: 4.2.0 + wazuh_min_version: 4.5.0 tier: 1 parameters: - - tags_to_apply: - type: set - brief: Run test if matches with a configuration identifier, skip otherwise. + - configure_local_internal_options_module: + type: fixture + brief: Set the local_internal_options for the test. - get_configuration: type: fixture brief: Get configurations from the module. @@ -144,35 +149,37 @@ def test_file_limit_full(get_configuration, configure_environment, restart_sysch - Verify that proper FIM events are generated while the database is in 'full database alert' mode. - input_description: A test case (file_limit_registry_conf) is contained in external YAML file (wazuh_conf.yaml) + input_description: A test case (fim_registry_limit) is contained in external YAML file (wazuh_conf.yaml) which includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined with the testing registry key to be monitored defined in this module. expected_output: - - r'.*Sending DB .* full alert.' - - r'.*The DB is full.*' - - r'.*Fim registry entries' + - r".*Registry database is (\\d+)% full." + - r".*Couldn't insert ('.*') entry into DB. 
The DB is full.*" + - r".*Fim registry values entries count: '(\\d+)'" tags: - scheduled ''' database_state = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_FILE_LIMIT_CAPACITY), - error_message=ERR_MSG_DATABASE_FULL_ALERT_EVENT).result() + callback=generate_monitoring_callback(CB_REGISTRY_LIMIT_CAPACITY), + error_message=ERR_MSG_DATABASE_FULL_ALERT).result() assert database_state == EXPECTED_DATABES_STATE, ERR_MSG_WRONG_VALUE_FOR_DATABASE_FULL - reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY) + reg1_handle = RegOpenKeyEx(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], MONITORED_KEY, 0, + KEY_ALL_ACCESS | KEY_WOW64_64KEY) modify_registry_value(reg1_handle, 'value_full', REG_SZ, 'added') RegCloseKey(reg1_handle) - wazuh_log_monitor.start(timeout=monitor_timeout, callback=generate_monitoring_callback(CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE), + wazuh_log_monitor.start(timeout=monitor_timeout, + callback=generate_monitoring_callback(CB_DATABASE_FULL_COULD_NOT_INSERT_VALUE), error_message=ERR_MSG_DATABASE_FULL_COULD_NOT_INSERT) - entries = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=generate_monitoring_callback(CB_COUNT_REGISTRY_FIM_ENTRIES), - error_message=ERR_MSG_FIM_INODE_ENTRIES).result() + value_entries = wazuh_log_monitor.start(timeout=monitor_timeout, + callback=generate_monitoring_callback(CB_COUNT_REGISTRY_VALUE_ENTRIES), + error_message=ERR_MSG_FIM_REGISTRY_VALUE_ENTRIES).result() - assert entries == str(get_configuration['metadata']['file_limit']), ERR_MSG_WRONG_NUMBER_OF_ENTRIES \ No newline at end of file + assert value_entries == str(get_configuration['metadata']['registries']), ERR_MSG_WRONG_NUMBER_OF_ENTRIES diff --git a/tests/integration/test_fim/test_registry/test_registry_multiple_registries/common.py b/tests/integration/test_fim/test_registry/test_registry_multiple_registries/common.py index 5593d274fd..9887bf6576 100644 --- a/tests/integration/test_fim/test_registry/test_registry_multiple_registries/common.py +++ b/tests/integration/test_fim/test_registry/test_registry_multiple_registries/common.py @@ -1,12 +1,14 @@ -# Copyright (C) 2015-2021, Wazuh Inc. +# Copyright (C) 2015-2023, Wazuh Inc. # Created by Wazuh, Inc. . # This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 import os -from wazuh_testing.fim import create_registry, registry_parser, check_time_travel, modify_registry, delete_registry, \ - callback_detect_event, validate_registry_key_event, KEY_WOW64_32KEY, modify_registry_value, delete_registry_value, \ - validate_registry_value_event, callback_value_event, REG_SZ, RegCloseKey +from wazuh_testing.modules.fim import registry_parser, KEY_WOW64_32KEY, REG_SZ, RegCloseKey +from wazuh_testing.modules.fim.classes import validate_registry_event +from wazuh_testing.modules.fim.event_monitor import callback_detect_event, callback_value_event +from wazuh_testing.modules.fim.utils import (create_registry, check_time_travel, modify_registry, delete_registry, + modify_registry_value, delete_registry_value) def multiple_keys_and_entries_keys(num_entries, subkeys, log_monitor, root_key, timeout=10): @@ -39,7 +41,7 @@ def perform_and_validate_events(func): error_message='Did not receive expected "Sending FIM event: ..." 
event').result() for ev in events: - validate_registry_key_event(ev) + validate_registry_event(ev, is_key=True) perform_and_validate_events(create_registry) perform_and_validate_events(modify_registry) @@ -81,7 +83,7 @@ def perform_and_validate_events(func, content='added', is_delete=False): error_message='Did not receive expected "Sending FIM event: ..." event').result() for ev in events: - validate_registry_value_event(ev) + validate_registry_event(ev, is_key=False) perform_and_validate_events(modify_registry_value) # Create perform_and_validate_events(modify_registry_value, content='modified') # Modify diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_bigger_file_limit.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_bigger_file_limit.py index 1ae893e0e7..d75f059776 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_bigger_file_limit.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_bigger_file_limit.py @@ -57,13 +57,12 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, KEY_WOW64_32KEY, KEY_WOW64_64KEY, generate_params, - calculate_registry_diff_paths, registry_value_create, registry_value_update, - registry_value_delete, registry_parser, create_values_content) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, - SIZE_LIMIT_CONFIGURED_VALUE, ERR_MSG_CONTENT_CHANGES_EMPTY, - ERR_MSG_CONTENT_CHANGES_NOT_EMPTY) +from wazuh_testing import LOG_FILE_PATH, global_parameters +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, KEY_WOW64_32KEY, + KEY_WOW64_64KEY, SIZE_LIMIT_CONFIGURED_VALUE) +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_CONTENT_CHANGES_EMPTY, ERR_MSG_CONTENT_CHANGES_NOT_EMPTY +from wazuh_testing.modules.fim.utils import (generate_params, calculate_registry_diff_paths, registry_value_create, + registry_value_update, registry_value_delete, create_values_content) from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor @@ -73,7 +72,7 @@ # Variables -test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), +test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2)] test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data") wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) @@ -93,8 +92,6 @@ configurations_path = os.path.join(test_data_path, "wazuh_registry_report_changes_limits_quota.yaml") configurations = load_wazuh_configurations(configurations_path, __name__, params=params, metadata=metadata) - - # Fixtures @@ -104,15 +101,11 @@ def get_configuration(request): return request.param - @pytest.mark.parametrize("size", [(8192), (32768)]) @pytest.mark.parametrize("key, subkey, arch, value_name", - [ - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_64KEY, "some_value"), - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_32KEY, "some_value"), - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2, KEY_WOW64_64KEY, "some_value"), - ], -) + [(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_64KEY, "some_value"), + (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, 
KEY_WOW64_32KEY, "some_value"), + (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2, KEY_WOW64_64KEY, "some_value"), ],) def test_disk_quota_values(key, subkey, arch, value_name, size, get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start): """ diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_values.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_values.py index 22c9c306fe..cf100fe75e 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_values.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_disk_quota/test_registry_disk_quota_values.py @@ -57,15 +57,15 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, KEY_WOW64_32KEY, KEY_WOW64_64KEY, generate_params, - calculate_registry_diff_paths, registry_value_create, registry_value_update, - registry_value_delete, registry_parser, create_values_content) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, - SIZE_LIMIT_CONFIGURED_VALUE, ERR_MSG_CONTENT_CHANGES_EMPTY, - ERR_MSG_CONTENT_CHANGES_NOT_EMPTY) +from wazuh_testing import LOG_FILE_PATH, global_parameters from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, + SIZE_LIMIT_CONFIGURED_VALUE, KEY_WOW64_32KEY, KEY_WOW64_64KEY) +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_CONTENT_CHANGES_EMPTY, ERR_MSG_CONTENT_CHANGES_NOT_EMPTY +from wazuh_testing.modules.fim.utils import (generate_params, calculate_registry_diff_paths, registry_value_create, + registry_value_update, registry_value_delete, create_values_content) + # Marks @@ -73,7 +73,7 @@ # Variables -test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), +test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2)] test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data") wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) @@ -105,12 +105,11 @@ def get_configuration(request): @pytest.mark.parametrize("size", [(4096), (32768)]) @pytest.mark.parametrize("key, subkey, arch, value_name", - [ - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_64KEY, "some_value"), - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_32KEY, "some_value"), - (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2, KEY_WOW64_64KEY, "some_value"), - ], -) + [ + (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_64KEY, "some_value"), + (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, KEY_WOW64_32KEY, "some_value"), + (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY_2, KEY_WOW64_64KEY, "some_value"), + ]) def test_disk_quota_values(key, subkey, arch, value_name, size, get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start): """ @@ -120,7 +119,7 @@ def test_disk_quota_values(key, subkey, arch, value_name, size, get_configuratio limit, and increase its size on each test case. Finally, the test will verify that the compressed file has been created, and the related FIM event includes the 'content_changes' field if the value size does not exceed the specified limit and viceversa. 
- - Case 1, small file - when compressed it will be less than the disk_quota. The file is generated + - Case 1, small file - when compressed it will be less than the disk_quota. The file is generated and the logs have content_changes data. - Case 2, big size - when compressed the file would be bigger than the disk_quota. The file is not generated and the logs should not have content_changes data. diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_all_limits_disabled.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_all_limits_disabled.py index c66437fbf2..9e500ffc94 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_all_limits_disabled.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_all_limits_disabled.py @@ -58,14 +58,14 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, registry_value_create, registry_value_update, registry_value_delete, - KEY_WOW64_32KEY, KEY_WOW64_64KEY, generate_params, calculate_registry_diff_paths, - create_values_content) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, - SIZE_LIMIT_CONFIGURED_VALUE, ERR_MSG_CONTENT_CHANGES_EMPTY) +from wazuh_testing import LOG_FILE_PATH, global_parameters from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, KEY_WOW64_32KEY, + KEY_WOW64_64KEY) +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_CONTENT_CHANGES_EMPTY +from wazuh_testing.modules.fim.utils import (generate_params, calculate_registry_diff_paths, create_values_content, + registry_value_create, registry_value_update, registry_value_delete) # Marks @@ -83,11 +83,11 @@ # Configurations params, metadata = generate_params(modes=['scheduled'], extra_params={'WINDOWS_REGISTRY_1': test_regs[0], - 'WINDOWS_REGISTRY_2': test_regs[1], - 'FILE_SIZE_ENABLED': 'no', - 'FILE_SIZE_LIMIT': '10KB', - 'DISK_QUOTA_ENABLED': 'no', - 'DISK_QUOTA_LIMIT': '4KB'}) + 'WINDOWS_REGISTRY_2': test_regs[1], + 'FILE_SIZE_ENABLED': 'no', + 'FILE_SIZE_LIMIT': '10KB', + 'DISK_QUOTA_ENABLED': 'no', + 'DISK_QUOTA_LIMIT': '4KB'}) configurations_path = os.path.join(test_data_path, 'wazuh_registry_report_changes_limits_quota.yaml') diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_diff_size_limit_values.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_diff_size_limit_values.py index 98f41e2391..1379b1f72a 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_diff_size_limit_values.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_diff_size_limit_values.py @@ -57,13 +57,12 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import (LOG_FILE_PATH, registry_value_create, registry_value_update, registry_value_delete, - KEY_WOW64_32KEY, KEY_WOW64_64KEY, generate_params, calculate_registry_diff_paths, - create_values_content) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, - SIZE_LIMIT_CONFIGURED_VALUE, ERR_MSG_CONTENT_CHANGES_EMPTY, - ERR_MSG_CONTENT_CHANGES_NOT_EMPTY) +from 
wazuh_testing import LOG_FILE_PATH, global_parameters +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, + KEY_WOW64_32KEY, KEY_WOW64_64KEY, SIZE_LIMIT_CONFIGURED_VALUE) +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_CONTENT_CHANGES_EMPTY, ERR_MSG_CONTENT_CHANGES_NOT_EMPTY +from wazuh_testing.modules.fim.utils import (registry_value_create, registry_value_update, registry_value_delete, + generate_params, calculate_registry_diff_paths, create_values_content) from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_file_size_values.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_file_size_values.py index ea41ef7b0e..3cb4d60755 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_file_size_values.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_file_size_values.py @@ -61,15 +61,16 @@ from wazuh_testing.fim import (LOG_FILE_PATH, registry_value_create, registry_value_update, registry_value_delete, KEY_WOW64_32KEY, KEY_WOW64_64KEY, generate_params, calculate_registry_diff_paths, create_values_content) -from wazuh_testing.fim_module import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, - SIZE_LIMIT_CONFIGURED_VALUE, ERR_MSG_CONTENT_CHANGES_EMPTY, - ERR_MSG_CONTENT_CHANGES_NOT_EMPTY) +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, MONITORED_KEY_2, + SIZE_LIMIT_CONFIGURED_VALUE) +from wazuh_testing.modules.fim.event_monitor import ERR_MSG_CONTENT_CHANGES_EMPTY, ERR_MSG_CONTENT_CHANGES_NOT_EMPTY from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules import WINDOWS, TIER1 # Marks -pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] +pytestmark = [WINDOWS, TIER1] # Variables @@ -82,10 +83,9 @@ # Configurations params, metadata = generate_params(modes=['scheduled'], extra_params={'WINDOWS_REGISTRY_1': test_regs[0], - 'WINDOWS_REGISTRY_2': test_regs[1], - 'FILE_SIZE_ENABLED': 'yes', - 'FILE_SIZE_LIMIT': '10KB' - }) + 'WINDOWS_REGISTRY_2': test_regs[1], + 'FILE_SIZE_ENABLED': 'yes', + 'FILE_SIZE_LIMIT': '10KB'}) configurations_path = os.path.join(test_data_path, 'wazuh_registry_file_size_values.yaml') @@ -115,9 +115,9 @@ def test_file_size_values(key, subkey, arch, value_name, size, get_configuration its size on each test case. Finally, the test will verify that the compressed 'diff' file has been created, and the related FIM event includes the 'content_changes' field if the value size does not exceed the specified limit and vice versa. - - Case 1, small size - The file is smaller than the file_limit configured, the diff_file is + - Case 1, small size - The file is smaller than the file_limit configured, the diff_file is generated and there is content_changes information - - Case 2, big size - The file is smaller than the file_limit configured,sp the diff_file is + - Case 2, big size - The file is smaller than the file_limit configured, so the diff_file is not generated and the logs should not have content_changes data. 
wazuh_min_version: 4.2.0 @@ -189,7 +189,7 @@ def report_changes_validator_diff(event): callback_test = report_changes_validator_no_diff else: callback_test = report_changes_validator_diff - + # Create the value inside the key - we do it here because it key or arch is not known before the test launches registry_value_create(key, subkey, wazuh_log_monitor, arch=arch, value_list=values, wait_for_scan=True, scan_delay=scan_delay, min_timeout=global_parameters.default_timeout, triggers_event=True) diff --git a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_report_changes_more_changes.py b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_report_changes_more_changes.py index a4f5519fb3..f8cea6ede6 100644 --- a/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_report_changes_more_changes.py +++ b/tests/integration/test_fim/test_registry/test_registry_report_changes/test_registry_report_changes_more_changes.py @@ -57,7 +57,7 @@ import os import pytest -from test_fim.test_files.test_report_changes.common import generate_string +from test_fim.common import generate_string from wazuh_testing import global_parameters from wazuh_testing.fim import LOG_FILE_PATH, calculate_registry_diff_paths, registry_value_cud, KEY_WOW64_32KEY, \ KEY_WOW64_64KEY, generate_params diff --git a/tests/integration/test_fim/test_synchronization/conftest.py b/tests/integration/test_fim/test_synchronization/conftest.py index 5782984fa5..c8838dbd40 100644 --- a/tests/integration/test_fim/test_synchronization/conftest.py +++ b/tests/integration/test_fim/test_synchronization/conftest.py @@ -1,52 +1,51 @@ -# Copyright (C) 2015-2021, Wazuh Inc. -# Created by Wazuh, Inc. . -# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 - -import pytest - -from wazuh_testing.fim import LOG_FILE_PATH, detect_initial_scan, detect_realtime_start, detect_whodata_start -from wazuh_testing.tools.file import truncate_file -from wazuh_testing.tools.monitoring import FileMonitor -from wazuh_testing.tools.services import control_service - - -@pytest.fixture(scope='module') -def restart_syscheckd(get_configuration, request): - """ - Reset ossec.log and start a new monitor. - """ - control_service('stop', daemon='wazuh-syscheckd') - truncate_file(LOG_FILE_PATH) - file_monitor = FileMonitor(LOG_FILE_PATH) - setattr(request.module, 'wazuh_log_monitor', file_monitor) - control_service('start', daemon='wazuh-syscheckd') - - -@pytest.fixture(scope='module') -def wait_for_fim_start(get_configuration, request): - """ - Wait for realtime start, whodata start or end of initial FIM scan. - """ - file_monitor = getattr(request.module, 'wazuh_log_monitor') - mode_key = 'fim_mode' if 'fim_mode2' not in get_configuration['metadata'] else 'fim_mode2' - - try: - if get_configuration['metadata'][mode_key] == 'realtime': - detect_realtime_start(file_monitor) - elif get_configuration['metadata'][mode_key] == 'whodata': - detect_whodata_start(file_monitor) - else: # scheduled - detect_initial_scan(file_monitor) - except KeyError: - detect_initial_scan(file_monitor) - - -@pytest.fixture(scope='module') -def wait_for_fim_start_sync_disabled(request): - """ - Wait for en of initial FIM scan. - - If detect_realtime_start is used, the synchronization event is skipped and the test fails. - """ - file_monitor = getattr(request.module, 'wazuh_log_monitor') - detect_initial_scan(file_monitor) +# Copyright (C) 2015-2022, Wazuh Inc. 
+ +# Created by Wazuh, Inc. . +# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +import pytest + +from wazuh_testing.fim import LOG_FILE_PATH, detect_initial_scan, detect_realtime_start, detect_whodata_start +from wazuh_testing.tools.file import truncate_file +from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.tools.services import control_service + + +@pytest.fixture(scope='module') +def restart_syscheckd(get_configuration, request): + """ + Reset ossec.log and start a new monitor. + """ + control_service('stop', daemon='wazuh-syscheckd') + truncate_file(LOG_FILE_PATH) + file_monitor = FileMonitor(LOG_FILE_PATH) + setattr(request.module, 'wazuh_log_monitor', file_monitor) + control_service('start', daemon='wazuh-syscheckd') + + +@pytest.fixture(scope='module') +def wait_for_fim_start(get_configuration, request): + """ + Wait for realtime start, whodata start or end of initial FIM scan. + """ + file_monitor = getattr(request.module, 'wazuh_log_monitor') + mode_key = 'fim_mode' if 'fim_mode2' not in get_configuration['metadata'] else 'fim_mode2' + + try: + if get_configuration['metadata'][mode_key] == 'realtime': + detect_realtime_start(file_monitor) + elif get_configuration['metadata'][mode_key] == 'whodata': + detect_whodata_start(file_monitor) + else: # scheduled + detect_initial_scan(file_monitor) + except KeyError: + detect_initial_scan(file_monitor) + + +@pytest.fixture(scope='module') +def wait_for_fim_start_sync(request): + """ + Wait for the sync initial FIM scan. + """ + file_monitor = getattr(request.module, 'wazuh_log_monitor') + detect_initial_scan(file_monitor) diff --git a/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_overlap.yaml b/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_overlap.yaml new file mode 100644 index 0000000000..10d7e317f2 --- /dev/null +++ b/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_overlap.yaml @@ -0,0 +1,38 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 20 + - directories: + value: MONITORED_DIR + attributes: + - check_all: 'yes' + - synchronization: + elements: + - interval: + value: INTERVAL + - response_timeout: + value: RESPONSE_TIMEOUT + - max_interval: + value: MAX_INTERVAL + - max_eps: + value: 1 + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_time.yaml b/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_time.yaml new file mode 100644 index 0000000000..10d7e317f2 --- /dev/null +++ b/tests/integration/test_fim/test_synchronization/data/configuration_template/configuration_sync_time.yaml @@ -0,0 +1,38 @@ +- sections: + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 20 + - directories: + value: MONITORED_DIR + attributes: + - check_all: 'yes' + - synchronization: + elements: + - interval: + value: INTERVAL + - response_timeout: + value: RESPONSE_TIMEOUT + - max_interval: + value: MAX_INTERVAL + - max_eps: + value: 1 + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - 
disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_overlap.yaml b/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_overlap.yaml new file mode 100644 index 0000000000..9ece9baffd --- /dev/null +++ b/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_overlap.yaml @@ -0,0 +1,111 @@ +- name: Doubled sync_interval smaller than max_interval + description: Detect that sync_interval is doubled when a new sync tries to start before sync has ended + configuration_parameters: + INTERVAL: 3 + RESPONSE_TIMEOUT: 3 + MAX_INTERVAL: 20 + metadata: + interval: 3 + response_timeout: 3 + doubled_times: 1 + max_interval: 20 + files: 5 + lower: false + +- name: Doubled sync_interval capped at max_interval + description: Detect that when doubled sync_interval will not exceed max_interval + configuration_parameters: + INTERVAL: 5 + RESPONSE_TIMEOUT: 5 + MAX_INTERVAL: 8 + metadata: + interval: 5 + response_timeout: 5 + doubled_times: 1 + max_interval: 8 + files: 10 + lower: false + +- name: response_timeout lower than interval + description: Detect behavior when response_timeout is lower than interval + configuration_parameters: + INTERVAL: 5 + RESPONSE_TIMEOUT: 3 + MAX_INTERVAL: 8 + metadata: + interval: 5 + response_timeout: 3 + doubled_times: 1 + max_interval: 8 + files: 10 + lower: false + +- name: Invalid response_timeout value + description: Check behavior when invalid (string) value is configured for response_timeout + configuration_parameters: + INTERVAL: 3 + RESPONSE_TIMEOUT: invalid + MAX_INTERVAL: 20 + metadata: + interval: 3 + response_timeout: invalid + doubled_times: 3 + max_interval: 20 + files: 5 + lower: false + +- name: Invalid response_timeout value (negative) + description: Check behavior when invalid (negative) value is configured for response_timeout + configuration_parameters: + INTERVAL: 3 + RESPONSE_TIMEOUT: -1 + MAX_INTERVAL: 20 + metadata: + interval: 3 + response_timeout: -1 + doubled_times: 3 + max_interval: 20 + files: 5 + lower: false + +- name: Invalid max_interval value (String) + description: Check behavior when invalid (string) value is configured for max_interval + configuration_parameters: + INTERVAL: 3 + RESPONSE_TIMEOUT: 5 + MAX_INTERVAL: invalid + metadata: + interval: 3 + response_timeout: 5 + doubled_times: 1 + max_interval: invalid + files: 5 + lower: false + +- name: Invalid max_interval value (Lower than interval) + description: Detect when max_interval is lower than interval, the value is set to be equal to interval + configuration_parameters: + INTERVAL: 5 + RESPONSE_TIMEOUT: 3 + MAX_INTERVAL: 3 + metadata: + interval: 5 + response_timeout: 3 + doubled_times: 1 + max_interval: 3 + files: 5 + lower: true + +- name: Invalid max_interval value (negative value) + description: Detect when max_interval is negative, max interval is not set + configuration_parameters: + INTERVAL: 5 + RESPONSE_TIMEOUT: 3 + MAX_INTERVAL: -1 + metadata: + interval: 5 + response_timeout: 3 + doubled_times: 1 + max_interval: invalid + files: 5 + lower: false diff --git a/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_time.yaml b/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_time.yaml new file mode 100644 index 0000000000..de3e90660a --- /dev/null +++ 
b/tests/integration/test_fim/test_synchronization/data/test_cases/cases_sync_time.yaml @@ -0,0 +1,11 @@ +- name: Short sync + description: Check behavior when sync is shorter than interval and max_interval + configuration_parameters: + INTERVAL: 10 + RESPONSE_TIMEOUT: 5 + MAX_INTERVAL: 10 + metadata: + interval: 10 + response_timeout: 5 + max_interval: 10 + files: 3 diff --git a/tests/integration/test_fim/test_synchronization/data/wazuh_conf_integrity_scan_win32.yaml b/tests/integration/test_fim/test_synchronization/data/wazuh_conf_integrity_scan_win32.yaml index 0d18d85e38..061a59341f 100644 --- a/tests/integration/test_fim/test_synchronization/data/wazuh_conf_integrity_scan_win32.yaml +++ b/tests/integration/test_fim/test_synchronization/data/wazuh_conf_integrity_scan_win32.yaml @@ -1,19 +1,33 @@ ---- -# conf 1 - tags: - - synchronize_events_conf + - apply_to_modules: - - test_synchronize_integrity_win32 + - test_synchronize_integrity_win32 sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - windows_registry: - value: TEST_REGS - attributes: - - arch: "both" + - section: syscheck + elements: + - disabled: + value: 'no' + - frequency: + value: 40 + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - windows_registry: + value: TEST_REGS + attributes: + - arch: both + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_synchronization/data/wazuh_conf_registry_responses_win32.yaml b/tests/integration/test_fim/test_synchronization/data/wazuh_conf_registry_responses_win32.yaml index 4f97ae7ef4..25c437caea 100644 --- a/tests/integration/test_fim/test_synchronization/data/wazuh_conf_registry_responses_win32.yaml +++ b/tests/integration/test_fim/test_synchronization/data/wazuh_conf_registry_responses_win32.yaml @@ -1,22 +1,35 @@ ---- # conf 1 - tags: - - registry_sync_responses + - registry_sync_responses apply_to_modules: - - test_registry_responses_win32 + - test_registry_responses_win32 sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - windows_registry: - value: WINDOWS_REGISTRY - attributes: - - check_mtime: 'no' - - arch: '64bit' - - synchronization: - elements: - - interval: - value: SYNC_INTERVAL - - max_interval: - value: SYNC_INTERVAL + - section: syscheck + elements: + - disabled: + value: 'no' + - windows_registry: + value: WINDOWS_REGISTRY + attributes: + - check_mtime: 'no' + - arch: 64bit + - synchronization: + elements: + - interval: + value: SYNC_INTERVAL + - max_interval: + value: SYNC_INTERVAL + - section: sca + elements: + - enabled: + value: 'no' + - section: rootcheck + elements: + - disabled: + value: 'yes' + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_fim/test_synchronization/data/wazuh_sync_conf_win32.yaml b/tests/integration/test_fim/test_synchronization/data/wazuh_sync_conf_win32.yaml index 16f5c5b221..b083453bde 100644 --- a/tests/integration/test_fim/test_synchronization/data/wazuh_sync_conf_win32.yaml +++ b/tests/integration/test_fim/test_synchronization/data/wazuh_sync_conf_win32.yaml @@ -1,27 +1,26 @@ - # Configuration for sync disabled - tags: - - sync_disabled + - sync_disabled apply_to_modules: - - test_sync_disabled_win32 - - 
test_sync_enabled_win32 - - test_sync_registry_enabled_win32 + - test_sync_disabled_win32 + - test_sync_enabled_win32 + - test_sync_registry_enabled_win32 sections: - - section: syscheck - elements: - - disabled: - value: 'no' - - synchronization: - elements: - - enabled: - value: SYNCHRONIZATION_ENABLED - - registry_enabled: - value: SYNCHRONIZATION_REGISTRY_ENABLED - - directories: - value: TEST_DIRECTORIES - attributes: - - FIM_MODE - - windows_registry: - value: TEST_REGISTRIES - attributes: - - arch: "both" \ No newline at end of file + - section: syscheck + elements: + - disabled: + value: 'no' + - synchronization: + elements: + - enabled: + value: SYNCHRONIZATION_ENABLED + - registry_enabled: + value: SYNCHRONIZATION_REGISTRY_ENABLED + - directories: + value: TEST_DIRECTORIES + attributes: + - FIM_MODE + - windows_registry: + value: TEST_REGISTRIES + attributes: + - arch: both diff --git a/tests/integration/test_fim/test_synchronization/test_registry_responses_win32.py b/tests/integration/test_fim/test_synchronization/test_registry_responses_win32.py index fb59cc68de..d4254cd359 100644 --- a/tests/integration/test_fim/test_synchronization/test_registry_responses_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_registry_responses_win32.py @@ -56,40 +56,35 @@ ''' import os import pytest -from wazuh_testing.fim import (generate_params, create_registry, modify_registry_value, registry_parser, - KEY_WOW64_64KEY, REG_SZ) +from wazuh_testing import LOG_FILE_PATH, DATA, WAZUH_SERVICES_START from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor from wazuh_testing.tools.services import control_service -from wazuh_testing.fim_module.fim_synchronization import find_value_in_event_list, get_sync_msgs -from wazuh_testing.fim_module.fim_variables import (SCHEDULE_MODE, WINDOWS_REGISTRY, SYNC_INTERVAL, SYNC_INTERVAL_VALUE, - YAML_CONF_REGISTRY_RESPONSE, WINDOWS_HKEY_LOCAL_MACHINE, - MONITORED_KEY) -from wazuh_testing.wazuh_variables import DATA, WAZUH_SERVICES_START, WINDOWS_DEBUG, VERBOSE_DEBUG_OUTPUT - +from wazuh_testing.modules.fim.utils import (find_value_in_event_list, get_sync_msgs, generate_params, create_registry, + modify_registry_value) +from wazuh_testing.modules.fim import (FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS, SCHEDULED_MODE, WINDOWS_REGISTRY, + SYNC_INTERVAL, SYNC_INTERVAL_VALUE, YAML_CONF_REGISTRY_RESPONSE, REG_SZ, + WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, registry_parser, KEY_WOW64_64KEY) +from wazuh_testing.modules.fim.event_monitor import detect_initial_scan # Marks - pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] -# variables - +# variables test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), DATA) configurations_path = os.path.join(test_data_path, YAML_CONF_REGISTRY_RESPONSE) conf_params = {WINDOWS_REGISTRY: os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY), - SYNC_INTERVAL: SYNC_INTERVAL_VALUE} + SYNC_INTERVAL: 10} # configurations - -conf_params, conf_metadata = generate_params(extra_params=conf_params, modes=[SCHEDULE_MODE]) +conf_params, conf_metadata = generate_params(extra_params=conf_params, modes=[SCHEDULED_MODE]) configurations = load_wazuh_configurations(configurations_path, __name__, params=conf_params, metadata=conf_metadata) -local_internal_options = {WINDOWS_DEBUG: VERBOSE_DEBUG_OUTPUT} +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS # fixtures - @pytest.fixture(scope='module', params=configurations) def 
get_configuration(request): """Get configurations from the module.""" @@ -97,8 +92,6 @@ def get_configuration(request): # tests - - @pytest.mark.parametrize('key_name', [':subkey1', 'subkey2:', ':subkey3:']) @pytest.mark.parametrize('value_name', [':value1', 'value2:', ':value3:']) def test_registry_sync_after_restart(key_name, value_name, configure_local_internal_options_module, @@ -124,7 +117,7 @@ def test_registry_sync_after_restart(key_name, value_name, configure_local_inter brief: Name of the value that will be created in the test. - configure_local_internal_options_module: type: fixture - brief: Configure the local internal options file. + brief: Configure the local internal options file. - get_configuration: type: fixture brief: Get configurations from the module. @@ -150,7 +143,6 @@ def test_registry_sync_after_restart(key_name, value_name, configure_local_inter tags: - scheduled - - time_travel ''' key_path = os.path.join(MONITORED_KEY, key_name) value_path = os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, key_path, value_name) @@ -160,9 +152,9 @@ def test_registry_sync_after_restart(key_name, value_name, configure_local_inter modify_registry_value(key_handle, value_name, REG_SZ, 'This is a test with syscheckd down.') control_service(WAZUH_SERVICES_START) + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + detect_initial_scan(wazuh_log_monitor) + events = get_sync_msgs(timeout=SYNC_INTERVAL_VALUE) - events = get_sync_msgs(SYNC_INTERVAL_VALUE) - - assert find_value_in_event_list( - os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, key_path), value_name, - events) is not None, f"No sync event was found for {value_path}" + assert find_value_in_event_list(os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, key_path), value_name, + events) is not None, f"No sync event was found for {value_path}" diff --git a/tests/integration/test_fim/test_synchronization/test_sync_disabled.py b/tests/integration/test_fim/test_synchronization/test_sync_disabled.py index 3ee4205612..b5589ef636 100644 --- a/tests/integration/test_fim/test_synchronization/test_sync_disabled.py +++ b/tests/integration/test_fim/test_synchronization/test_sync_disabled.py @@ -94,7 +94,7 @@ def get_configuration(request): # Tests -def test_sync_disabled(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start): +def test_sync_disabled(get_configuration, configure_environment, install_audit, restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon uses the value of the 'enabled' tag to disable the file synchronization. For this purpose, the test will monitor a testing directory, @@ -112,12 +112,12 @@ def test_sync_disabled(get_configuration, configure_environment, restart_syschec - configure_environment: type: fixture brief: Configure a custom environment for testing. + - install_audit: + type: fixture + brief: install audit to check whodata. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start: - type: fixture - brief: Wait for realtime start, whodata start, or end of initial FIM scan. 
assertions: - Verify that no FIM 'integrity' event is generated when the value diff --git a/tests/integration/test_fim/test_synchronization/test_sync_disabled_win32.py b/tests/integration/test_fim/test_synchronization/test_sync_disabled_win32.py index d56281980b..f8d4d82ac5 100644 --- a/tests/integration/test_fim/test_synchronization/test_sync_disabled_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_sync_disabled_win32.py @@ -57,18 +57,18 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR, DATA from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.wazuh_variables import DATA -from wazuh_testing.fim_module.fim_variables import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, - YAML_CONF_SYNC_WIN32, TEST_DIRECTORIES, TEST_REGISTRIES, - SYNCHRONIZATION_ENABLED, CB_INTEGRITY_CONTROL_MESSAGE, - SYNCHRONIZATION_REGISTRY_ENABLED) -# Marks +from wazuh_testing.modules.fim import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, TEST_DIRECTORIES, + YAML_CONF_SYNC_WIN32, TEST_REGISTRIES, SYNCHRONIZATION_ENABLED, + SYNCHRONIZATION_REGISTRY_ENABLED) +from wazuh_testing.modules.fim.event_monitor import CB_INTEGRITY_CONTROL_MESSAGE +from wazuh_testing.modules.fim.utils import generate_params +# Marks pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # variables @@ -95,16 +95,21 @@ # fixtures - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" return request.param -# Tests +@pytest.fixture(scope='module') +def create_a_file(get_configuration): + """Create a file previous to restart syscheckd""" + create_file(REGULAR, test_directories[0], 'test_file.txt') + -def test_sync_disabled(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start_sync_disabled): +# Tests +def test_sync_disabled(get_configuration, configure_environment, create_a_file, restart_syscheckd, + wait_for_fim_start_sync): ''' description: Check if the 'wazuh-syscheckd' daemon uses the value of the 'enabled' tag to start/stop the file/registry synchronization. For this purpose, the test will monitor a directory/key. @@ -125,7 +130,7 @@ def test_sync_disabled(get_configuration, configure_environment, restart_syschec - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start_sync_disabled: + - wait_for_fim_start_sync: type: fixture brief: Wait for end of initial FIM scan. 
assertions: diff --git a/tests/integration/test_fim/test_synchronization/test_sync_enabled_win32.py b/tests/integration/test_fim/test_synchronization/test_sync_enabled_win32.py index 5d25c9f10b..280a2c6045 100644 --- a/tests/integration/test_fim/test_synchronization/test_sync_enabled_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_sync_enabled_win32.py @@ -58,15 +58,18 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params, callback_detect_integrity_event +from wazuh_testing import global_parameters, DATA, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor -from wazuh_testing.wazuh_variables import DATA -from wazuh_testing.fim_module.fim_variables import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, - YAML_CONF_SYNC_WIN32, TEST_DIRECTORIES, TEST_REGISTRIES, - SYNCHRONIZATION_ENABLED, SYNCHRONIZATION_REGISTRY_ENABLED) +from wazuh_testing.modules.fim.utils import generate_params +from wazuh_testing.modules.fim import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, + YAML_CONF_SYNC_WIN32, TEST_DIRECTORIES, TEST_REGISTRIES, + SYNCHRONIZATION_ENABLED, SYNCHRONIZATION_REGISTRY_ENABLED) +from wazuh_testing.modules.fim.event_monitor import (callback_detect_registry_integrity_event, + callback_detect_file_integrity_event) + # Marks pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] @@ -95,16 +98,21 @@ # fixtures - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" return request.param -# Tests +@pytest.fixture(scope='module') +def create_a_file(get_configuration): + """Create a file prior to restarting syscheckd""" + create_file(REGULAR, test_directories[0], 'testfile') -def test_sync_disabled(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start_sync_disabled): + +# Tests +def test_sync_enabled(get_configuration, configure_environment, create_a_file, restart_syscheckd, + wait_for_fim_start_sync): ''' description: Check if the 'wazuh-syscheckd' daemon uses the value of the 'enabled' tag to start/stop the file/registry synchronization. For this purpose, the test will monitor a directory/key. @@ -122,10 +130,13 @@ def test_sync_disabled(get_configuration, configure_environment, restart_syschec - configure_environment: type: fixture brief: Configure a custom environment for testing. + - create_a_file: + type: fixture + brief: Create a file before restarting syscheckd, to verify that created files also appear in the synchronization when it is enabled. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start_sync_disabled: + - wait_for_fim_start_sync: type: fixture brief: Wait for end of initial FIM scan.
@@ -142,18 +153,15 @@ def test_sync_disabled(get_configuration, configure_environment, restart_syschec tags: - scheduled - - time_travel - - realtime - - who_data ''' # The file synchronization event should be triggered event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, - callback=callback_detect_integrity_event, update_position=True).result() + callback=callback_detect_file_integrity_event, update_position=True).result() - assert event['component'] == 'fim_file', 'Wrong event component' + assert event['component'] == 'fim_file', 'Did not receive the expected "fim_file" event' # The registry synchronization event should be triggered event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, update_position=True, - callback=callback_detect_integrity_event).result() + callback=callback_detect_registry_integrity_event).result() - assert event['component'] == 'fim_registry', 'Wrong event component' + assert event['component'] == 'fim_registry_key', 'Did not receive the expected "fim_registry_key" event' diff --git a/tests/integration/test_fim/test_synchronization/test_sync_interval_win32.py b/tests/integration/test_fim/test_synchronization/test_sync_interval_win32.py index 60ce6ff030..80a41801f1 100644 --- a/tests/integration/test_fim/test_synchronization/test_sync_interval_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_sync_interval_win32.py @@ -155,7 +155,7 @@ def test_sync_interval(get_configuration, configure_environment, restart_syschec # This should fail as we are only advancing half the time needed for synchronization to occur except TimeoutError: - pytest.xfail("Expected fail due to issue: https://github.com/wazuh/wazuh-qa/issues/947 ") + pytest.skip("Expected fail due to issue: https://github.com/wazuh/wazuh-qa/issues/947 ") check_time_travel(True, interval=interval / 2) try: diff --git a/tests/integration/test_fim/test_synchronization/test_sync_overlap.py b/tests/integration/test_fim/test_synchronization/test_sync_overlap.py new file mode 100644 index 0000000000..97ffbd0423 --- /dev/null +++ b/tests/integration/test_fim/test_synchronization/test_sync_overlap.py @@ -0,0 +1,208 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: Check if the 'wazuh-syscheckd' daemon is performing a synchronization at the intervals specified in the + configuration, using the 'interval' tag. If a new synchronization is fired and the last sync message has been + received in less time than 'response_timeout', the sync interval is doubled. + The new value for the interval cannot be higher than the max_interval option. After a new sync interval is tried and the + last message was received in a time that is higher than response_timeout, the sync interval value is returned to + the configured value.
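The interval-doubling behaviour described above can be summarised with a small, hypothetical helper (illustrative only, not wazuh-syscheckd code); the example values come from the cases_sync_overlap.yaml cases earlier in this diff:

```python
# Hypothetical model of the documented behaviour, not wazuh-syscheckd code.
def next_sync_interval(current, configured, max_interval, previous_sync_still_running):
    """Return the interval to use for the next FIM synchronization."""
    if previous_sync_still_running:
        # Overlap: the pending sync is skipped and the interval is doubled,
        # but it can never exceed max_interval.
        return min(current * 2, max_interval)
    # The previous sync finished in time, so reset to the configured interval.
    return configured

# Values from the 'capped at max_interval' case: interval=5, max_interval=8.
assert next_sync_interval(5, 5, 8, True) == 8    # 5 doubled would be 10, capped at 8
assert next_sync_interval(8, 5, 8, False) == 5   # reset after a successful sync
```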
+ +components: + - fim + +suite: synchronization + +targets: + - agent + - manager + +daemons: + - wazuh-syscheckd + +os_platform: + - linux + - windows + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + - Windows Server 2019 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html#synchronization + +pytest_args: + - fim_mode: + scheduled: monitoring is done at a preconfigured interval. + realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. + whodata: Implies real-time monitoring but adding the 'who-data' information. + - tier: + 0: Basic functionalities and quick to perform. + 1: Functionalities of medium complexity. + 2: Advanced functionalities and are slow to perform. + +tags: + - fim_synchronization +''' +import os +import pytest + +from wazuh_testing import global_parameters +from wazuh_testing.tools import LOG_FILE_PATH, configuration +from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback +from wazuh_testing.modules import TIER1, AGENT, SERVER +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS, MONITORED_DIR_1 +from wazuh_testing.modules.fim.event_monitor import (callback_detect_synchronization, CB_INVALID_CONFIG_VALUE, + ERR_MSG_INVALID_CONFIG_VALUE, ERR_MSG_FIM_SYNC_NOT_DETECTED, + CB_SYNC_SKIPPED, ERR_MSG_SYNC_SKIPPED_EVENT, + CB_SYNC_INTERVAL_RESET, ERR_MSG_SYNC_INTERVAL_RESET_EVENT) + +# Marks +pytestmark = [AGENT, SERVER, TIER1] + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_sync_overlap.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_sync_overlap.yaml') + +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = configuration.get_test_cases_data(test_cases_path) +# This assigns the monitored dir during runtime depending on the OS, cannot be added to yaml +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['MONITORED_DIR'] = MONITORED_DIR_1 +configurations = configuration.load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) + +# Variables +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS + + +# Tests +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +@pytest.mark.parametrize('files_number', [configuration_metadata[0]['files']]) +def test_sync_overlap(configuration, metadata, set_wazuh_configuration, configure_local_internal_options_function, + create_files_in_folder, restart_syscheck_function, wait_fim_start): + ''' + description: Check if the 'wazuh-syscheckd' daemon is performing a synchronization at the interval specified in the + configuration, using the 'interval' tag, if a new synchronization is fired, and the last sync message + has been recieved in less time than 'response_timeout, the sync interval is doubled. 
+ The new value for interval cannot be higher than max_interval option. After a new sync interval is + tried and the last message was recieved in a time that is higher than response_timeout, the sync + interval value is returned to the configured value. + + test_phases: + - Create a folder with a number of files inside. + - Restart syscheckd. + - Check that a sync interval started. + - Check that next sync is skipped and interval value is doubled + - Check that interval value is returned to configured value after successful sync + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - set_wazuh_configuration: + type: fixture + brief: Set ossec.conf configuration. + - configure_local_internal_options_function: + type: fixture + brief: Set local_internal_options.conf file. + - create_files_in_folder: + type: fixture + brief: create files in monitored folder, and delete them after the test. + - restart_syscheck_function: + type: fixture + brief: restart syscheckd daemon, and truncate the ossec.log. + - wait_for_fim_start_function: + type: fixture + brief: check that the starting fim scan is detected. + + assertions: + - Verify that the new value for interval when doubled is equal or lower to max_interval. + + input_description: A test case (sync_interval) is contained in external YAML file (cases_sync_overlap.yaml) which + includes configuration settings for the 'wazuh-syscheckd' daemon. That is combined with + the interval periods and the testing directory to be monitored defined in this module. + + expected_output: + - r'Initializing FIM Integrity Synchronization check' + - r"*Sync still in progress. Skipped next sync and increased interval.*'(\\d+)s'" + - r".*Previous sync was successful. 
Sync interval is reset to: '(\\d+)s'" + + tags: + - scheduled + ''' + + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + # If config is invalid, check that the invalid config value message appears + if metadata['response_timeout'] == 'invalid' or metadata['max_interval'] == 'invalid': + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=generate_monitoring_callback(CB_INVALID_CONFIG_VALUE), + error_message=ERR_MSG_INVALID_CONFIG_VALUE, + update_position=True).result() + + # Wait for new sync + wazuh_log_monitor.start(timeout=global_parameters.default_timeout, callback=callback_detect_synchronization, + error_message=ERR_MSG_FIM_SYNC_NOT_DETECTED, update_position=True).result() + + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + # Check that if response_timeout has elapsed and sync is still running, the sync interval is doubled + interval = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*5, + callback=generate_monitoring_callback(CB_SYNC_SKIPPED), + accum_results=metadata['doubled_times'], + error_message=ERR_MSG_SYNC_SKIPPED_EVENT, update_position=True).result() + + if metadata['doubled_times'] > 1: + new_interval = interval[-1] + else: + new_interval = interval + + # Check that the doubled interval is not higher than max_interval, if max_interval is higher than the configured interval + # Check that the doubled interval is equal to the configured interval, if max_interval is lower than the configured interval + if metadata['max_interval'] != 'invalid': + if not metadata['lower']: + assert int(new_interval) <= int(metadata['max_interval']), f"Invalid value for interval: {new_interval},\ + cannot be more than MAX_INTERVAL:\ + {metadata['max_interval']}" + else: + assert int(new_interval) <= int(metadata['interval']), f"Invalid value for interval {new_interval}, cannot\ + be more than interval: {metadata['interval']}" + + # Check that when sync ends, sync_interval is returned to normal after response_timeout since the last message.
+ if not metadata['lower']: + result = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*10, + callback=generate_monitoring_callback(CB_SYNC_INTERVAL_RESET), + error_message=ERR_MSG_SYNC_INTERVAL_RESET_EVENT, + update_position=True).result() + + assert int(result) == int(metadata['interval']), f"Invalid value for interval: {result}, it should be reset to\ + interval: {metadata['interval']}" diff --git a/tests/integration/test_fim/test_synchronization/test_sync_registry_enabled_win32.py b/tests/integration/test_fim/test_synchronization/test_sync_registry_enabled_win32.py index 49c33e52d5..c566cbfaeb 100644 --- a/tests/integration/test_fim/test_synchronization/test_sync_registry_enabled_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_sync_registry_enabled_win32.py @@ -58,18 +58,18 @@ import os import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, generate_params +from wazuh_testing import global_parameters, DATA, LOG_FILE_PATH from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor, generate_monitoring_callback -from wazuh_testing.wazuh_variables import DATA -from wazuh_testing.fim_module.fim_variables import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, - YAML_CONF_SYNC_WIN32, TEST_DIRECTORIES, TEST_REGISTRIES, - SYNCHRONIZATION_ENABLED, CB_INTEGRITY_CONTROL_MESSAGE, - SYNCHRONIZATION_REGISTRY_ENABLED) -# Marks +from wazuh_testing.modules.fim import (TEST_DIR_1, WINDOWS_HKEY_LOCAL_MACHINE, MONITORED_KEY, YAML_CONF_SYNC_WIN32, + TEST_DIRECTORIES, TEST_REGISTRIES, SYNCHRONIZATION_ENABLED, + SYNCHRONIZATION_REGISTRY_ENABLED) +from wazuh_testing.modules.fim.event_monitor import CB_INTEGRITY_CONTROL_MESSAGE +from wazuh_testing.modules.fim.utils import generate_params + +# Marks pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # variables @@ -105,7 +105,7 @@ def get_configuration(request): # Tests -def test_sync_disabled(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start_sync_disabled): +def test_sync_disabled(get_configuration, configure_environment, restart_syscheckd, wait_for_fim_start_sync): ''' description: Check if the 'wazuh-syscheckd' daemon uses the value of the 'enabled' tag to start/stop the file/registry synchronization. For this purpose, the test will monitor a directory/key. @@ -126,7 +126,7 @@ def test_sync_disabled(get_configuration, configure_environment, restart_syschec - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. - - wait_for_fim_start_sync_disabled: + - wait_for_fim_start_sync: type: fixture brief: Wait for end of initial FIM scan. assertions: diff --git a/tests/integration/test_fim/test_synchronization/test_sync_time.py b/tests/integration/test_fim/test_synchronization/test_sync_time.py new file mode 100644 index 0000000000..4a424fc518 --- /dev/null +++ b/tests/integration/test_fim/test_synchronization/test_sync_time.py @@ -0,0 +1,174 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: Check when the 'wazuh-syscheckd' daemon is performing a synchronization, a normal synchronization will end +before the configured `interval` and `max_interval`. 
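As a rough sketch of the timing check this new test performs (a hypothetical helper; the real check lives in the test body below, which adds one second for the first second of the scan):

```python
# Hypothetical helper mirroring the check performed later in this test file.
from datetime import datetime

def sync_finished_in_time(sync_start: datetime, last_state_event: datetime,
                          interval: int, max_interval: int) -> bool:
    # +1 second accounts for the first second of the scan, as the test does.
    delta = (last_state_event - sync_start).total_seconds() + 1
    return delta <= interval and delta <= max_interval
```

With the cases_sync_time.yaml values (interval=10, max_interval=10), the last integrity state event must therefore arrive within roughly 9 seconds of the sync start.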
+ +components: + - fim + +suite: synchronization + +targets: + - agent + - manager + +daemons: + - wazuh-syscheckd + +os_platform: + - linux + - windows + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + - Windows Server 2019 + + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html#synchronization + +pytest_args: + - fim_mode: + scheduled: monitoring is done at a preconfigured interval. + realtime: Enable real-time monitoring on Linux (using the 'inotify' system calls) and Windows systems. + whodata: Implies real-time monitoring but adding the 'who-data' information. + - tier: + 0: Basic functionalities and quick to perform. + 1: Functionalities of medium complexity. + 2: Advanced functionalities and are slow to perform. + +tags: + - fim_synchronization +''' +import os +import pytest + + +from wazuh_testing import global_parameters +from wazuh_testing.tools import LOG_FILE_PATH, configuration +from wazuh_testing.tools.monitoring import FileMonitor +from wazuh_testing.modules import TIER1, AGENT, SERVER +from wazuh_testing.modules.fim import MONITORED_DIR_1, FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS +from wazuh_testing.modules.fim import event_monitor as evm + +# Marks +pytestmark = [AGENT, SERVER, TIER1] + +local_internal_options = FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Configuration and cases data +test_cases_path = os.path.join(TEST_CASES_PATH, 'cases_sync_time.yaml') +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_sync_time.yaml') + +# Test configurations +configuration_parameters, configuration_metadata, test_case_ids = configuration.get_test_cases_data(test_cases_path) +# This assigns the monitored_dir during runtime depending on the OS, cannot be added to yaml +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['MONITORED_DIR'] = MONITORED_DIR_1 +configurations = configuration.load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) + +# Variables +wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + +# Tests +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=test_case_ids) +@pytest.mark.parametrize('files_number', [configuration_metadata[0]['files']]) +def test_sync_time(configuration, metadata, set_wazuh_configuration, configure_local_internal_options_function, + create_files_in_folder, restart_syscheck_function, wait_fim_start): + ''' + description: Check when the 'wazuh-syscheckd' daemon is performing a synchronization, a normal synchronization + will end before the configured `interval` and `max_interval`. + + test_phases: + - Create a folder with a number of files inside. + - Restart syscheckd. + - Check that a sync interval started, and get the time it starts + - Get all the integrity state events time. + - Assert that the time it took for the sync to complete was less than the configured interval and max_interval. 
+ + wazuh_min_version: 4.5.0 + + tier: 2 + + parameters: + - configuration: + type: dict + brief: Configuration values for ossec.conf. + - metadata: + type: dict + brief: Test case data. + - set_wazuh_configuration: + type: fixture + brief: Set ossec.conf configuration. + - configure_local_internal_options_function: + type: fixture + brief: Set local_internal_options.conf file. + - create_files_in_folder: + type: fixture + brief: create files in monitored folder, and delete them after the test. + - restart_syscheck_function: + type: fixture + brief: restart syscheckd daemon, and truncate the ossec.log. + - wait_for_fim_start_function: + type: fixture + brief: check that the starting fim scan is detected. + + assertions: + - Assert sync time delta is smaller than interval + - Assert sync time delta is smaller than max_interval + + input_description: A test case is contained in external YAML file (cases_sync_interval.yaml) which includes + configuration settings for the 'wazuh-syscheckd' daemon. That is combined with the interval + periods and the testing directory to be monitored defined in this module. + + expected_output: + - r'Initializing FIM Integrity Synchronization check' + - r".*Executing FIM sync" + - r".*Sending integrity control message.*" + + tags: + - scheduled + ''' + + wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) + + # Wait for new sync and get start time + sync_time = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=evm.callback_sync_start_time, + error_message=evm.ERR_MSG_FIM_SYNC_NOT_DETECTED, update_position=True).result() + + # Get the time of all the sync state events for the created files + results = wazuh_log_monitor.start(timeout=global_parameters.default_timeout, + callback=evm.callback_state_event_time, accum_results=3, + error_message=evm.ERR_MSG_FIM_SYNC_NOT_DETECTED, update_position=True).result() + + # Calculate timedelta between start of sync and last message. 
+ # Add 1 second to take into account the first second from the scan + delta = (results[-1] - sync_time).total_seconds() + 1 + + # Assert that sync took less time that interval and max_interval + assert delta <= metadata['interval'], f"Error: Sync took longer than interval: {metadata['interval']}" + assert delta <= metadata['max_interval'], f"Error: Sync took longer than max_interval: {metadata['max_interval']}" diff --git a/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_scan.py b/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_scan.py index 676ac6fce3..6702e6dda0 100644 --- a/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_scan.py +++ b/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_scan.py @@ -60,7 +60,7 @@ import pytest from wazuh_testing import global_parameters from wazuh_testing.fim import LOG_FILE_PATH, generate_params, create_file, REGULAR, \ - callback_detect_event, callback_real_time_whodata_started + callback_detect_event, callback_real_time_whodata_started, callback_detect_synchronization from wazuh_testing.tools import PREFIX from wazuh_testing.tools.configuration import load_wazuh_configurations from wazuh_testing.tools.monitoring import FileMonitor @@ -107,14 +107,8 @@ def extra_configuration_before_yield(): create_file(REGULAR, testdir, file, content='Sample content') -def callback_integrity_synchronization_check(line): - if 'Initializing FIM Integrity Synchronization check' in line: - return line - return None - - def callback_integrity_or_whodata(line): - if callback_integrity_synchronization_check(line): + if callback_detect_synchronization(line): return 1 elif callback_real_time_whodata_started(line): return 2 @@ -124,7 +118,8 @@ def callback_integrity_or_whodata(line): @pytest.mark.parametrize('tags_to_apply', [ {'synchronize_events_conf'} ]) -def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure_environment, restart_syscheckd): +def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure_environment, install_audit, + restart_syscheckd): ''' description: Check if the 'wazuh-syscheckd' daemon detects events while the synchronization is performed simultaneously. For this purpose, the test will monitor a testing directory. Then, it @@ -145,6 +140,9 @@ def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure - configure_environment: type: fixture brief: Configure a custom environment for testing. + - install_audit: + type: fixture + brief: install audit to check whodata. - restart_syscheckd: type: fixture brief: Clear the 'ossec.log' file and start a new monitor. 
@@ -176,20 +174,20 @@ def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure value_1 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 2, callback=callback_integrity_or_whodata, error_message='Did not receive expected "File integrity monitoring ' - 'real-time Whodata engine started" or "Initializing ' - 'FIM Integrity Synchronization check"').result() + 'real-time Whodata engine started" or ' + '"Executing FIM sync"').result() value_2 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 2, callback=callback_integrity_or_whodata, error_message='Did not receive expected "File integrity monitoring ' - 'real-time Whodata engine started" or "Initializing FIM ' - 'Integrity Synchronization check"').result() + 'real-time Whodata engine started" or ' + '"Executing FIM sync"').result() assert value_1 != value_2, "callback_integrity_or_whodata detected the same message twice" else: # Check the integrity scan has begun wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 3, - callback=callback_integrity_synchronization_check, + callback=callback_detect_synchronization, error_message='Did not receive expected ' '"Initializing FIM Integrity Synchronization check" event') diff --git a/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_win32.py b/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_win32.py index e7e4064337..f150053880 100644 --- a/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_win32.py +++ b/tests/integration/test_fim/test_synchronization/test_synchronize_integrity_win32.py @@ -54,34 +54,37 @@ - fim_synchronization ''' import os -from datetime import timedelta - +import time import pytest -from wazuh_testing import global_parameters -from wazuh_testing.fim import LOG_FILE_PATH, create_registry, generate_params, \ - create_file, modify_registry_value, REGULAR, callback_detect_event, callback_real_time_whodata_started, \ - KEY_WOW64_64KEY, registry_parser, REG_SZ + +from wazuh_testing import global_parameters, LOG_FILE_PATH, REGULAR from wazuh_testing.tools import PREFIX -from wazuh_testing.tools.configuration import load_wazuh_configurations, check_apply_test +from wazuh_testing.tools.configuration import load_wazuh_configurations +from wazuh_testing.tools.file import create_file from wazuh_testing.tools.monitoring import FileMonitor -from wazuh_testing.tools.time import TimeMachine - +from wazuh_testing.modules import TIER2, WINDOWS +from wazuh_testing.modules.fim import (WINDOWS_HKEY_LOCAL_MACHINE, KEY_WOW64_64KEY, registry_parser, + REG_SZ, MONITORED_KEY) +from wazuh_testing.modules.fim.event_monitor import (callback_detect_event, callback_real_time_whodata_started, + callback_detect_synchronization, ERR_MSG_FIM_EVENT_NOT_RECIEVED, + ERR_MSG_INTEGRITY_OR_WHODATA_NOT_STARTED, + ERR_MSG_INTEGRITY_CHECK_EVENT) +from wazuh_testing.modules.fim.utils import create_registry, generate_params, modify_registry_value +from wazuh_testing.modules.fim import FIM_DEFAULT_LOCAL_INTERNAL_OPTIONS as local_internal_options # Marks +pytestmark = [WINDOWS, TIER2] -pytestmark = [pytest.mark.win32, pytest.mark.tier(level=1)] # variables -key = "HKEY_LOCAL_MACHINE" -subkey = "SOFTWARE\\test_key" +subkey = MONITORED_KEY test_directories = [os.path.join(PREFIX, 'testdir1'), os.path.join(PREFIX, 'testdir2')] -test_regs = [os.path.join(key, subkey)] +test_regs = [os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, subkey)] directory_str = ','.join(test_directories) 
test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') configurations_path = os.path.join(test_data_path, 'wazuh_conf_integrity_scan_win32.yaml') -testdir1, testdir2 = test_directories conf_params = {'TEST_DIRECTORIES': test_directories, - 'TEST_REGS': os.path.join(key, subkey)} + 'TEST_REGS': os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, subkey)} file_list = [] subkey_list = [] @@ -99,7 +102,6 @@ # fixtures - @pytest.fixture(scope='module', params=configurations) def get_configuration(request): """Get configurations from the module.""" @@ -111,27 +113,19 @@ def extra_configuration_before_yield(): for testdir in test_directories: for file, reg in zip(file_list, subkey_list): create_file(REGULAR, testdir, file, content='Sample content') - create_registry(registry_parser[key], os.path.join(key, 'SOFTWARE', reg), KEY_WOW64_64KEY) - - -def callback_integrity_synchronization_check(line): - if 'Initializing FIM Integrity Synchronization check' in line: - return line - return None + create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], os.path.join(subkey, reg), KEY_WOW64_64KEY) def callback_integrity_or_whodata(line): - if callback_integrity_synchronization_check(line): + if callback_detect_synchronization(line): return 1 elif callback_real_time_whodata_started(line): return 2 # tests -@pytest.mark.parametrize('tags_to_apply', [ - {'synchronize_events_conf'} -]) -def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure_environment, restart_syscheckd): +def test_events_while_integrity_scan(get_configuration, configure_environment, restart_syscheckd, + configure_local_internal_options_module): ''' description: Check if the 'wazuh-syscheckd' daemon detects events while the synchronization is performed simultaneously. For this purpose, the test will monitor a testing directory and registry key. @@ -176,47 +170,39 @@ def test_events_while_integrity_scan(tags_to_apply, get_configuration, configure - realtime - who_data ''' - check_apply_test(tags_to_apply, get_configuration['tags']) - folder = testdir1 if get_configuration['metadata']['fim_mode'] == 'realtime' else testdir2 - key_h = create_registry(registry_parser[key], subkey, KEY_WOW64_64KEY) + folder = test_directories[0] if get_configuration['metadata']['fim_mode'] == 'realtime' else test_directories[1] + key_h = create_registry(registry_parser[WINDOWS_HKEY_LOCAL_MACHINE], subkey, KEY_WOW64_64KEY) # Wait for whodata to start and the synchronization check. 
Since they are different threads, we cannot expect # them to come in order every time if get_configuration['metadata']['fim_mode'] == 'whodata': - value_1 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 2, + value_1 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 3, callback=callback_integrity_or_whodata, - error_message='Did not receive expected "File integrity monitoring ' - 'real-time Whodata engine started" or "Initializing ' - 'FIM Integrity Synchronization check"').result() + error_message=ERR_MSG_INTEGRITY_OR_WHODATA_NOT_STARTED).result() - value_2 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 2, + value_2 = wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 3, callback=callback_integrity_or_whodata, - error_message='Did not receive expected "File integrity monitoring ' - 'real-time Whodata engine started" or "Initializing FIM ' - 'Integrity Synchronization check"').result() + error_message=ERR_MSG_INTEGRITY_OR_WHODATA_NOT_STARTED).result() assert value_1 != value_2, "callback_integrity_or_whodata detected the same message twice" else: # Check the integrity scan has begun wazuh_log_monitor.start(timeout=global_parameters.default_timeout * 3, - callback=callback_integrity_synchronization_check, - error_message='Did not receive expected ' - '"Initializing FIM Integrity Synchronization check" event') + callback=callback_detect_synchronization, + error_message=ERR_MSG_INTEGRITY_CHECK_EVENT) # Create a file and a registry value. Assert syscheckd detects it while doing the integrity scan file_name = 'file' create_file(REGULAR, folder, file_name, content='') modify_registry_value(key_h, "test_value", REG_SZ, 'added') - sending_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*3, callback=callback_detect_event, - error_message='Did not receive expected ' - '"Sending FIM event: ..." event').result() + sending_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*5, callback=callback_detect_event, + error_message=ERR_MSG_FIM_EVENT_NOT_RECIEVED).result() assert sending_event['data']['path'] == os.path.join(folder, file_name) - TimeMachine.travel_to_future(timedelta(hours=13)) - sending_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*3, callback=callback_detect_event, - error_message='Did not receive expected ' - '"Sending FIM event: ..." 
event').result() - assert sending_event['data']['path'] == os.path.join(key, subkey) + time.sleep(global_parameters.default_timeout) + sending_event = wazuh_log_monitor.start(timeout=global_parameters.default_timeout*5, callback=callback_detect_event, + error_message=ERR_MSG_FIM_EVENT_NOT_RECIEVED).result() + assert sending_event['data']['path'] == os.path.join(WINDOWS_HKEY_LOCAL_MACHINE, subkey) assert sending_event['data']['arch'] == '[x64]' diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_ignore_multiple_regex.yaml b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_ignore_multiple_regex.yaml new file mode 100644 index 0000000000..64228cb91e --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_ignore_multiple_regex.yaml @@ -0,0 +1,33 @@ +- sections: + - section: localfile + elements: + - log_format: + value: syslog + - location: + value: LOCATION + - ignore: + value: REGEX_1 + - ignore: + value: REGEX_2 + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: syscheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_ignore_regex_values.yaml b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_ignore_regex_values.yaml new file mode 100644 index 0000000000..3c8b8bcd1e --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_ignore_regex_values.yaml @@ -0,0 +1,37 @@ +- sections: + - section: localfile + elements: + - log_format: + value: syslog + - location: + value: LOCATION + - restrict: + value: RESTRICT_REGEX + attributes: + - type: RESTRICT_TYPE + - ignore: + value: IGNORE_REGEX + attributes: + - type: IGNORE_TYPE + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: syscheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_multiple_regex.yaml b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_multiple_regex.yaml new file mode 100644 index 0000000000..8202777005 --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/configuration_template/configuration_restrict_multiple_regex.yaml @@ -0,0 +1,33 @@ +- sections: + - section: localfile + elements: + - log_format: + value: syslog + - location: + value: LOCATION + - restrict: + value: REGEX_1 + - restrict: + value: REGEX_2 + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: syscheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: syscollector + elements: + - disabled: + value: 'yes' diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_ignore_multiple_regex.yaml 
b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_ignore_multiple_regex.yaml new file mode 100644 index 0000000000..2b8ae0542d --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_ignore_multiple_regex.yaml @@ -0,0 +1,43 @@ +- name: Log match - Two ignore tags - Match first tag + description: Test two Ignore tags, with matching log first tag + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex1" + matches: regex1 + +- name: Log match - Two ignore tags - Match both tags + description: Test two Ignore tags, with matching log both tags + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex1 regex2" + matches: regex1 regex2 + +- name: Log match - Two ignore tags - Match second tag + description: Test two Ignore tags, with matching log second tag + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex2" + matches: regex2 + +- name: No match - Two ignore tags + description: Test two Ignore tags, with no matches + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log does not matches" + matches: no match diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_ignore_regex_values.yaml b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_ignore_regex_values.yaml new file mode 100644 index 0000000000..ec5f28a86c --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_ignore_regex_values.yaml @@ -0,0 +1,83 @@ +# Test PCRE2 +- name: Matches with restrict tag - Ignore and restrict tags with PCRE2 regex + description: Test Restrict + Ignore tags both with PCRE2 regex. Log matches restrict + configuration_parameters: + RESTRICT_REGEX: .*restrict + IGNORE_REGEX: .*ignore + RESTRICT_TYPE: PCRE2 + IGNORE_TYPE: PCRE2 + metadata: + restrict_regex: .*restrict + ignore_regex: .*ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches restrict" + matches: restrict + +# Test osregex +- name: Matches with ignore tag - Ignore and restrict tags with osregex regex + description: Test Restrict + Ignore tags both with osregex regex. Log matches ignore + configuration_parameters: + RESTRICT_REGEX: \.restrict + IGNORE_REGEX: \.ignore + RESTRICT_TYPE: osregex + IGNORE_TYPE: osregex + metadata: + restrict_regex: \\.restrict + ignore_regex: \\.ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches ignore" + matches: ignore + +# Test osmatch +- name: Log match - Ignore and restrict tags with osmatch regex + description: Test Restrict + Ignore tags both with osmatch regex. 
Log matches both + configuration_parameters: + RESTRICT_REGEX: restrict$ + IGNORE_REGEX: ignore + RESTRICT_TYPE: osmatch + IGNORE_TYPE: osmatch + metadata: + restrict_regex: restrict\$ + ignore_regex: ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches ignore restrict" + matches: ignore - restrict + +# Test pcre2 + osregex +- name: Matches with restrict tag - Ignore and restrict tags with pcre2 and osregex regex + description: Test Restrict + Ignore tags (pcre2+osregex). Log matches restrict + configuration_parameters: + RESTRICT_REGEX: .*restrict + IGNORE_REGEX: \.ignore + RESTRICT_TYPE: pcre2 + IGNORE_TYPE: osregex + metadata: + restrict_regex: .*restrict + ignore_regex: \\.ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches restrict" + matches: restrict + +# Test pcre2 + osmatch +- name: Matches with ignore tag - Ignore and restrict tags with pcre2 and osmatch regex + description: Test Restrict + Ignore tags (pcre2+osmatch). Log matches ignore + configuration_parameters: + RESTRICT_REGEX: .*restrict + IGNORE_REGEX: ignore + RESTRICT_TYPE: pcre2 + IGNORE_TYPE: osmatch + metadata: + restrict_regex: .*restrict + ignore_regex: ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches ignore" + matches: ignore + +# Test osmatch + osregex +- name: Log match - Ignore and restrict tags with osregex and osmatch regex + description: Test Restrict + Ignore tags (osregex+osmatch). Log matches both + configuration_parameters: + RESTRICT_REGEX: \.restrict + IGNORE_REGEX: ignore + RESTRICT_TYPE: osregex + IGNORE_TYPE: osmatch + metadata: + restrict_regex: \\.restrict + ignore_regex: ignore + log_sample: "Nov 10 12:19:04 localhost sshd: log matches ignore restrict" + matches: ignore - restrict diff --git a/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_multiple_regex.yaml b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_multiple_regex.yaml new file mode 100644 index 0000000000..0c12dbade4 --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/data/test_cases/cases_restrict_multiple_regex.yaml @@ -0,0 +1,43 @@ +- name: Log match - Two Restrict tags - Match first tag + description: Test two Restrict tags, with matching first tag + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex1" + matches: regex1 + +- name: Log match - Two Restrict tags - Match both tags + description: Test two Restrict tags, with log matching both tags + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex1 regex2" + matches: regex1 regex2 + +- name: Log match - Two Restrict tags - Match second tag + description: Test two Restrict tags, with matching second tag + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log matches regex2" + matches: regex2 + +- name: No match - Two Restrict tags + description: Test two Restrict tags, log does not match + configuration_parameters: + REGEX_1: .+regex1 + REGEX_2: .+regex2 + metadata: + regex1: .+regex1 + regex2: .+regex2 + log_sample: "Nov 10 12:19:04 localhost sshd: log does not matches" + matches: no match diff --git 
a/tests/integration/test_logcollector/test_log_filter_options/test_ignore_regex.py b/tests/integration/test_logcollector/test_log_filter_options/test_ignore_regex.py new file mode 100644 index 0000000000..f83b572aed --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/test_ignore_regex.py @@ -0,0 +1,177 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: The 'wazuh-logcollector' daemon monitors configured files and commands for new log messages. + Specifically, these tests check the behavior of the restrict and ignore options, that allow + users to configure regex patterns that limit if a log will be sent to analysis or will be ignored. + The restrict causes any log that does not match the regex to be ignored, conversely, the 'ignore' option + causes logs that match the regex to be ignored and not be sent for analysis. + +components: + - logcollector + +suite: log_filter_options + +targets: + - agent + - manager + +daemons: + - wazuh-logcollector + +os_platform: + - linux + - windows + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + - Windows 10 + - Windows Server 2019 + - Windows Server 2016 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/log-data-collection/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/localfile.html + - https://documentation.wazuh.com/current/user-manual/reference/statistics-files/wazuh-logcollector-state.html + - https://documentation.wazuh.com/current/user-manual/reference/internal-options.html#logcollector + +tags: + - logcollector_options +''' +import os +import re +import sys +import pytest +from wazuh_testing.tools import PREFIX +from wazuh_testing.tools.local_actions import run_local_command_returning_output +from wazuh_testing.tools.configuration import load_configuration_template, get_test_cases_data +from wazuh_testing.modules.logcollector import event_monitor as evm +from wazuh_testing.modules import logcollector as lc + + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Test configurations and cases data +test_file = os.path.join(PREFIX, 'test') + +# --------------------------------TEST_IGNORE_MULTIPLE_REGEX------------------------------------------- +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_ignore_multiple_regex.yaml') +cases_path = os.path.join(TEST_CASES_PATH, 'cases_ignore_multiple_regex.yaml') + +configuration_parameters, configuration_metadata, case_ids = get_test_cases_data(cases_path) +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['LOCATION'] = test_file +configurations = load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) + +prefix = lc.LOG_COLLECTOR_PREFIX +local_internal_options = lc.LOGCOLLECTOR_DEFAULT_LOCAL_INTERNAL_OPTIONS + + +# Tests +@pytest.mark.tier(level=1) +@pytest.mark.parametrize('new_file_path,', [test_file], ids=['']) +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=case_ids) +def 
test_ignore_multiple_regex(configuration, metadata, new_file_path, create_file, truncate_monitored_files, + set_wazuh_configuration, configure_local_internal_options_function, + restart_wazuh_function): + ''' + description: Check the logcollector behavior when two ignore tags are added. + + test_phases: + - Setup: + - Create file to monitor logs + - Truncate ossec.log file + - Set ossec.conf and local_internal_options.conf + - Restart the wazuh daemon + - Test: + - Insert the log message. + - Check expected response. + - Teardown: + - Delete the monitored file + - Restore ossec.conf and local_internal_options.conf + - Stop Wazuh + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Wazuh configuration data. Needed for set_wazuh_configuration fixture. + - metadata: + type: dict + brief: Wazuh configuration metadata + - new_file_path: + type: str + brief: path for the log file to be created and deleted after the test. + - create_file: + type: fixture + brief: Create an empty file for logging + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. + - set_wazuh_configuration: + type: fixture + brief: Set the wazuh configuration according to the configuration data. + - configure_local_internal_options: + type: fixture + brief: Configure the local_internal_options file. + - restart_wazuh_function: + type: fixture + brief: Restart wazuh. + + assertions: + - Check that logcollector is analyzing the log file. + - Check that logs are ignored when they match a configured regex + + input_description: + - The `configuration_ignore_multiple_regex.yaml` file provides the module configuration for this test. + - The `cases_ignore_multiple_regex.yaml` file provides the test cases. 
+ + expected_output: + - r".*wazuh-logcollector.*Analizing file: '{file}'.*" + - r".*wazuh-logcollector.*DEBUG: Reading syslog '{message}'.*" + - r".*wazuh-logcollector.*DEBUG: Ignoring the log line '{message}' due to {tag} config: '{regex}'" + ''' + log = metadata['log_sample'] + command = f"echo {log}>> {test_file}" + + file = re.escape(test_file) if sys.platform == 'win32' else test_file + + # Check log file is being analized + evm.check_analyzing_file(file=file, prefix=prefix) + # Insert log + run_local_command_returning_output(command) + + # Check the log is read from the monitored file + evm.check_syslog_message(message=log, prefix=prefix) + + # Check response + if 'regex1' in metadata['matches']: + evm.check_ignore_restrict_message(message=log, regex=metadata['regex1'], tag='ignore', prefix=prefix) + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex2'], tag='ignore', prefix=prefix) + elif metadata['matches'] == 'regex2': + evm.check_ignore_restrict_message(message=log, regex=metadata['regex2'], tag='ignore', prefix=prefix) + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex1'], tag='ignore', prefix=prefix) + else: + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex1'], tag='ignore', prefix=prefix) + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex2'], tag='ignore', prefix=prefix) diff --git a/tests/integration/test_logcollector/test_log_filter_options/test_restrict_ignore_regex.py b/tests/integration/test_logcollector/test_log_filter_options/test_restrict_ignore_regex.py new file mode 100644 index 0000000000..2d9347bb19 --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/test_restrict_ignore_regex.py @@ -0,0 +1,191 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: The 'wazuh-logcollector' daemon monitors configured files and commands for new log messages. + Specifically, these tests check the behavior of the restrict and ignore options, that allow + users to configure regex patterns that limit if a log will be sent to analysis or will be ignored. + The restrict causes any log that does not match the regex to be ignored, conversely, the 'ignore' option + causes logs that match the regex to be ignored and not be sent for analysis. 
+ +components: + - logcollector + +suite: log_filter_options + +targets: + - agent + - manager + +daemons: + - wazuh-logcollector + +os_platform: + - linux + - windows + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + - Windows 10 + - Windows Server 2019 + - Windows Server 2016 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/log-data-collection/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/localfile.html + - https://documentation.wazuh.com/current/user-manual/reference/statistics-files/wazuh-logcollector-state.html + - https://documentation.wazuh.com/current/user-manual/reference/internal-options.html#logcollector + +tags: + - logcollector_options +''' +import os +import sys +import re +import pytest + +from wazuh_testing.tools import PREFIX +from wazuh_testing.tools.local_actions import run_local_command_returning_output +from wazuh_testing.tools.configuration import load_configuration_template, get_test_cases_data +from wazuh_testing.tools.services import get_service +from wazuh_testing.modules.logcollector import event_monitor as evm +from wazuh_testing.modules import logcollector as lc + + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Configuration and cases data +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_restrict_ignore_regex_values.yaml') +cases_path = os.path.join(TEST_CASES_PATH, 'cases_restrict_ignore_regex_values.yaml') + +# Test configurations +test_file = os.path.join(PREFIX, 'test') + +configuration_parameters, configuration_metadata, case_ids = get_test_cases_data(cases_path) +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['LOCATION'] = test_file +configurations = load_configuration_template(configurations_path, configuration_parameters, configuration_metadata) +prefix = lc.LOG_COLLECTOR_PREFIX +local_internal_options = lc.LOGCOLLECTOR_DEFAULT_LOCAL_INTERNAL_OPTIONS + + +# Tests +@pytest.mark.tier(level=1) +@pytest.mark.parametrize('new_file_path,', [test_file], ids=['']) +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=case_ids) +def test_restrict_ignore_regex_values(configuration, metadata, new_file_path, create_file, truncate_monitored_files, + set_wazuh_configuration, configure_local_internal_options_function, + restart_wazuh_function): + ''' + description: Check if logcollector reads or ignores a log according to the regexes configured in the restrict + and ignore tags for a given log file, using each supported value of the 'type' attribute on + both tags. + + test_phases: + - Setup: + - Create file to monitor logs + - Truncate ossec.log file + - Set ossec.conf and local_internal_options.conf + - Restart the wazuh daemon + - Test: + - Insert the log message. + - Check expected response. + - Teardown: + - Delete the monitored file + - Restore ossec.conf and local_internal_options.conf + - Stop Wazuh + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Wazuh configuration data. Needed for set_wazuh_configuration fixture. 
+ - metadata: + type: dict + brief: Wazuh configuration metadata + - new_file_path: + type: str + brief: path for the log file to be created and deleted after the test. + - create_file: + type: fixture + brief: Create an empty file for logging + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. + - set_wazuh_configuration: + type: fixture + brief: Set the wazuh configuration according to the configuration data. + - configure_local_internal_options: + type: fixture + brief: Configure the local_internal_options file. + - restart_wazuh_function: + type: fixture + brief: Restart wazuh. + + assertions: + - Check that logcollector is analyzing the log file. + - Check that logs are ignored when they do not match with configured regex + + input_description: + - The `configuration_restrict_ignore_regex_values.yaml` file provides the module configuration for this test. + - The `cases_restrict_ignore_regex_values.yaml` file provides the test cases. + + expected_output: + - r".*wazuh-logcollector.*Analizing file: '{file}'.*" + - r".*wazuh-logcollector.*DEBUG: Reading syslog '{message}'.*" + - r".*wazuh-logcollector.*DEBUG: Ignoring the log line '{message}' due to {tag} config: '{regex}'" + ''' + log = metadata['log_sample'] + command = f"echo {log}>> {test_file}" + + file = re.escape(test_file) if sys.platform == 'win32' else test_file + + # Check log file is being analized + evm.check_analyzing_file(file=file, prefix=prefix) + + # Insert log + run_local_command_returning_output(command) + # Check the log is read from the monitored file + evm.check_syslog_message(message=log, prefix=prefix) + + # Check responses + # If it matches with ignore, it should ignore the log due to ignore config + if 'ignore' in metadata['matches']: + evm.check_ignore_restrict_message(message=log, regex=metadata['ignore_regex'], tag='ignore', + prefix=prefix) + if 'restrict' in metadata['matches']: + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['restrict_regex'], tag='restrict', + prefix=prefix) + + # If matches with restrict, it should not be ignored due to restrict config + elif metadata['matches'] == 'restrict': + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['restrict_regex'], tag='restrict', + prefix=prefix) + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['ignore_regex'], tag='ignore', + prefix=prefix) + + # If it matches with None, the log should be ignored due to restrict config and not due to ignore config + else: + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['ignore_regex'], tag='ignore', + prefix=prefix) + evm.check_ignore_restrict_message(message=log, regex=metadata['restrict_regex'], tag='restrict', + prefix=prefix) diff --git a/tests/integration/test_logcollector/test_log_filter_options/test_restrict_regex.py b/tests/integration/test_logcollector/test_log_filter_options/test_restrict_regex.py new file mode 100644 index 0000000000..6aace6b990 --- /dev/null +++ b/tests/integration/test_logcollector/test_log_filter_options/test_restrict_regex.py @@ -0,0 +1,185 @@ +''' +copyright: Copyright (C) 2015-2022, Wazuh Inc. + + Created by Wazuh, Inc. . + + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 + +type: integration + +brief: The 'wazuh-logcollector' daemon monitors configured files and commands for new log messages. 
+ Specifically, these tests check the behavior of the restrict and ignore options, that allow + users to configure regex patterns that limit if a log will be sent to analysis or will be ignored. + The restrict causes any log that does not match the regex to be ignored, conversely, the 'ignore' option + causes logs that match the regex to be ignored and not be sent for analysis. + +components: + - logcollector + +suite: log_filter_options + +targets: + - agent + - manager + +daemons: + - wazuh-logcollector + +os_platform: + - linux + - windows + +os_version: + - Arch Linux + - Amazon Linux 2 + - Amazon Linux 1 + - CentOS 8 + - CentOS 7 + - Debian Buster + - Red Hat 8 + - Ubuntu Focal + - Ubuntu Bionic + - Windows 10 + - Windows Server 2019 + - Windows Server 2016 + +references: + - https://documentation.wazuh.com/current/user-manual/capabilities/log-data-collection/index.html + - https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/localfile.html + - https://documentation.wazuh.com/current/user-manual/reference/statistics-files/wazuh-logcollector-state.html + - https://documentation.wazuh.com/current/user-manual/reference/internal-options.html#logcollector + +tags: + - logcollector_options +''' +import os +import sys +import re +import pytest + +from wazuh_testing.tools import PREFIX +from wazuh_testing.tools.local_actions import run_local_command_returning_output +from wazuh_testing.tools.configuration import load_configuration_template, get_test_cases_data +from wazuh_testing.modules.logcollector import event_monitor as evm +from wazuh_testing.modules import logcollector as lc + + +# Reference paths +TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +CONFIGURATIONS_PATH = os.path.join(TEST_DATA_PATH, 'configuration_template') +TEST_CASES_PATH = os.path.join(TEST_DATA_PATH, 'test_cases') + +# Test configurations and cases data +test_file = os.path.join(PREFIX, 'test') + +# --------------------------------TEST_RESTRICT_MULTIPLE_REGEX------------------------------------------- +configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_restrict_multiple_regex.yaml') +cases_path = os.path.join(TEST_CASES_PATH, 'cases_restrict_multiple_regex.yaml') + +configuration_parameters, configuration_metadata, case_ids = get_test_cases_data(cases_path) +for count, value in enumerate(configuration_parameters): + configuration_parameters[count]['LOCATION'] = test_file +configurations = load_configuration_template(configurations_path, configuration_parameters, + configuration_metadata) +prefix = lc.LOG_COLLECTOR_PREFIX +local_internal_options = lc.LOGCOLLECTOR_DEFAULT_LOCAL_INTERNAL_OPTIONS + + +# Tests +@pytest.mark.tier(level=1) +@pytest.mark.parametrize('new_file_path,', [test_file], ids=['']) +@pytest.mark.parametrize('configuration, metadata', zip(configurations, configuration_metadata), ids=case_ids) +def test_restrict_multiple_regex(configuration, metadata, new_file_path, create_file, truncate_monitored_files, + set_wazuh_configuration, configure_local_internal_options_function, + restart_wazuh_function): + ''' + description: Check if logcollector behavior when two restrict tags are added. + + test_phases: + - Setup: + - Create file to monitor logs + - Truncate ossec.log file + - Set ossec.conf and local_internal_options.conf + - Restart the wazuh daemon + - Test: + - Insert the log message. + - Check expected response. 
+ - Teardown: + - Delete the monitored file + - Restore ossec.conf and local_internal_options.conf + - Stop Wazuh + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Wazuh configuration data. Needed for set_wazuh_configuration fixture. + - metadata: + type: dict + brief: Wazuh configuration metadata + - new_file_path: + type: str + brief: path for the log file to be created and deleted after the test. + - create_file: + type: fixture + brief: Create an empty file for logging + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. + - set_wazuh_configuration: + type: fixture + brief: Set the wazuh configuration according to the configuration data. + - configure_local_internal_options: + type: fixture + brief: Configure the local_internal_options file. + - restart_wazuh_function: + type: fixture + brief: Restart wazuh. + + assertions: + - Check that logcollector is analyzing the log file. + - Check that logs are ignored when they match with configured regex + + input_description: + - The `configuration_restrict_multiple_regex.yaml` file provides the module configuration for this test. + - The `cases_restrict_multiple_regex` file provides the test cases. + + expected_output: + - r".*wazuh-logcollector.*Analizing file: '{file}'.*" + - r".*wazuh-logcollector.*DEBUG: Reading syslog '{message}'.*" + - r".*wazuh-logcollector.*DEBUG: Ignoring the log line '{message}' due to {tag} config: '{regex}'" + ''' + log = metadata['log_sample'] + command = f"echo {log}>> {test_file}" + + if sys.platform == 'win32': + file = re.escape(test_file) + else: + file = test_file + + # Check log file is being analized + evm.check_analyzing_file(file=file, prefix=prefix) + # Insert log + run_local_command_returning_output(command) + + # Check the log is read from the monitored file + evm.check_syslog_message(message=log, prefix=prefix) + + # Check response + if 'regex1' in metadata['matches']: + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex1'], tag='restrict', + prefix=prefix) + if 'regex2' in metadata['matches']: + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex2'], tag='restrict', + prefix=prefix) + else: + evm.check_ignore_restrict_message(message=log, regex=metadata['regex2'], tag='restrict', prefix=prefix) + elif metadata['matches'] == 'regex2': + evm.check_ignore_restrict_message_not_found(message=log, regex=metadata['regex2'], tag='restrict', + prefix=prefix) + evm.check_ignore_restrict_message(message=log, regex=metadata['regex1'], tag='restrict', prefix=prefix) + else: + evm.check_ignore_restrict_message(message=log, regex=metadata['regex1'], tag='restrict', prefix=prefix) diff --git a/tests/integration/test_logcollector/test_only_future_events/test_only_future_events.py b/tests/integration/test_logcollector/test_only_future_events/test_only_future_events.py index 8c9b5c0441..0e1280b431 100644 --- a/tests/integration/test_logcollector/test_only_future_events/test_only_future_events.py +++ b/tests/integration/test_logcollector/test_only_future_events/test_only_future_events.py @@ -54,6 +54,7 @@ - logcollector_only_future_events ''' import os +import re import tempfile import sys import pytest @@ -80,7 +81,7 @@ temp_dir = tempfile.gettempdir() log_test_path = os.path.join(temp_dir, 'wazuh-testing', 'test.log') -LOG_LINE = 'Jan 1 00:00:00 localhost test[0]: line=' +LOG_LINE = 'Jan 1 00:00:00 localhost test: line=' prefix 
= LOG_COLLECTOR_PREFIX local_internal_options = {'logcollector.vcheck_files': '5', 'logcollector.debug': '2', 'windows.debug': '2'} @@ -197,8 +198,13 @@ def test_only_future_events(configuration, metadata, set_wazuh_configuration, current_line = 0 log_monitor = setup_log_monitor + if sys.platform == 'win32': + file = re.escape(log_test_path) + else: + file = log_test_path + # Ensure that the file is being analyzed - evm.check_analyzing_file(file_monitor=log_monitor, file=log_test_path, + evm.check_analyzing_file(file_monitor=log_monitor, file=file, error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix) # Add n log lines corresponding to 1KB @@ -208,9 +214,9 @@ def test_only_future_events(configuration, metadata, set_wazuh_configuration, # Check that the last written line has been read by logcollector last_line = current_line + 1 message = f"{LOG_LINE}{last_line}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_10, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_10, escape=False) # Stop logcollector daemon control_service('stop', daemon=LOGCOLLECTOR_DAEMON) @@ -225,36 +231,36 @@ def test_only_future_events(configuration, metadata, set_wazuh_configuration, if metadata['only_future_events'] == 'no': # Check first log line message = f"{LOG_LINE}{first_next_line}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_20, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_20, escape=False) # Check last log line message = f"{LOG_LINE}{current_line + 1}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_20, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_20, escape=False) # if only_future_events yes, logcollector should NOT detect the log lines written while it was stopped else: message = f"{LOG_LINE}{first_next_line}" # Check that the first written line is not read with pytest.raises(TimeoutError): message = f"{LOG_LINE}{first_next_line}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_10, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_10, escape=False) # Check that the last written line is not read with pytest.raises(TimeoutError): # Check last line message = f"{LOG_LINE}{current_line + 1}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_10, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_10, escape=False) # Check that if we write new data when the daemon is turned on, it is read normally current_line = logcollector.add_log_data(log_path=metadata['location'], 
log_line_message=LOG_LINE, size_kib=1, line_start=current_line + 1, print_line_num=True) message = f"{LOG_LINE}{current_line + 1}" - evm.check_syslog_messages(file_monitor=log_monitor, message=message, - error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, - timeout=T_10, escape=True) + evm.check_syslog_message(file_monitor=log_monitor, message=message, + error_message=GENERIC_CALLBACK_ERROR_COMMAND_MONITORING, prefix=prefix, + timeout=T_10, escape=False) diff --git a/tests/integration/test_logcollector/test_options/data/configuration/wazuh_configuration.yaml b/tests/integration/test_logcollector/test_options/data/configuration/wazuh_configuration.yaml deleted file mode 100644 index f0b89052ff..0000000000 --- a/tests/integration/test_logcollector/test_options/data/configuration/wazuh_configuration.yaml +++ /dev/null @@ -1,13 +0,0 @@ -- tags: - - test_options - apply_to_modules: - - test_options_state_interval_no_file - sections: - - section: localfile - attributes: - - name: 'testing files' - elements: - - log_format: - value: 'syslog' - - location: - value: LOCATION diff --git a/tests/integration/test_logcollector/test_options/data/configuration_template/wazuh_configuration.yaml b/tests/integration/test_logcollector/test_options/data/configuration_template/wazuh_configuration.yaml new file mode 100644 index 0000000000..2ef9b5befc --- /dev/null +++ b/tests/integration/test_logcollector/test_options/data/configuration_template/wazuh_configuration.yaml @@ -0,0 +1,13 @@ +- tags: + - test_options + apply_to_modules: + - test_options_state_interval_no_file + sections: + - section: localfile + attributes: + - name: testing files + elements: + - log_format: + value: syslog + - location: + value: LOCATION diff --git a/tests/integration/test_logcollector/test_options/test_options_state_interval.py b/tests/integration/test_logcollector/test_options/test_options_state_interval.py index 6d8e5c9f12..9aa0b8593a 100644 --- a/tests/integration/test_logcollector/test_options/test_options_state_interval.py +++ b/tests/integration/test_logcollector/test_options/test_options_state_interval.py @@ -76,6 +76,7 @@ wazuh_log_monitor = FileMonitor(LOG_FILE_PATH) state_interval_update_timeout = 10 + # Fixtures @pytest.fixture(scope="module", params=state_interval) def get_local_internal_options(request): @@ -143,16 +144,16 @@ def test_options_state_interval(get_local_internal_options, file_monitoring): error_message=f"The message: 'Invalid definition for " f"logcollector.state_interval: {interval}.' 
didn't appear") else: - control_service('restart') - sleep(state_interval_update_timeout) - logcollector.wait_statistics_file(timeout=interval + 5) - previous_modification_time = os.path.getmtime(LOGCOLLECTOR_STATISTICS_FILE) + control_service('restart') + sleep(state_interval_update_timeout) + logcollector.wait_statistics_file(timeout=interval + 5) + previous_modification_time = os.path.getmtime(LOGCOLLECTOR_STATISTICS_FILE) + last_modification_time = os.path.getmtime(LOGCOLLECTOR_STATISTICS_FILE) + while last_modification_time == previous_modification_time: + sleep(elapsed_time_modification) last_modification_time = os.path.getmtime(LOGCOLLECTOR_STATISTICS_FILE) - while last_modification_time == previous_modification_time: - sleep(elapsed_time_modification) - last_modification_time = os.path.getmtime(LOGCOLLECTOR_STATISTICS_FILE) - elapsed = last_modification_time - previous_modification_time - if sys.platform == 'win32': - assert interval - 30 < elapsed < interval + 30 - else: - assert interval - 1 < elapsed < interval + 1 \ No newline at end of file + elapsed = last_modification_time - previous_modification_time + if sys.platform == 'win32': + assert interval - 30 < elapsed < interval + 30 + else: + assert interval - 1 < elapsed < interval + 1 diff --git a/tests/integration/test_logcollector/test_options/test_options_state_interval_no_file.py b/tests/integration/test_logcollector/test_options/test_options_state_interval_no_file.py index e219664e99..1401b759ff 100644 --- a/tests/integration/test_logcollector/test_options/test_options_state_interval_no_file.py +++ b/tests/integration/test_logcollector/test_options/test_options_state_interval_no_file.py @@ -69,7 +69,7 @@ pytestmark = [pytest.mark.linux, pytest.mark.tier(level=1), pytest.mark.server] # Configuration -test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data', 'configuration') +test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data', 'configuration_template') configurations_path = os.path.join(test_data_path, 'wazuh_configuration.yaml') daemons_handler_configuration = {'all_daemons': True} diff --git a/tests/integration/test_remoted/test_multi_groups/test_merged_mg_file_content.py b/tests/integration/test_remoted/test_multi_groups/test_merged_mg_file_content.py index 667a8ab833..3d638798df 100644 --- a/tests/integration/test_remoted/test_multi_groups/test_merged_mg_file_content.py +++ b/tests/integration/test_remoted/test_multi_groups/test_merged_mg_file_content.py @@ -194,7 +194,7 @@ def test_merged_mg_file_content(metadata, configure_local_internal_options_modul else: raise FileNotFoundError(f"The file: {merged_mg_file} was not created.") - expected_conditions = [True, [expected_line + '\n']] if action == 'create' else [False, []] + expected_conditions = [False, [expected_line + '\n']] if action == 'create' else [False, []] assert file_exists == expected_conditions[0], f"The file was not {action}d in the multigroups directory.\n" if action == 'created': assert match_expected_line in expected_conditions[1], f"The file is not in {merged_mg_file}." 
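All of the log-filter tests added above converge on the same check: a line is appended to the monitored file and the framework then waits for the DEBUG entry quoted in each docstring's expected_output, `Ignoring the log line '<message>' due to <tag> config: '<regex>'`. The snippet below is a minimal standalone sketch of that matching step, not part of this PR; it assumes the `evm.check_ignore_restrict_message*` helpers reduce to a regex search equivalent to this, and the sample ossec.log prefix is illustrative only.

```python
import re

# Format quoted in the tests' expected_output sections (assumed verbatim here):
#   .*wazuh-logcollector.*DEBUG: Ignoring the log line '<message>' due to <tag> config: '<regex>'
FILTERED_LINE_TEMPLATE = (r".*wazuh-logcollector.*DEBUG: Ignoring the log line "
                          r"'{message}' due to {tag} config: '{regex}'")


def line_was_filtered(ossec_log_line, message, tag, regex):
    """Return True if an ossec.log entry reports that `message` was dropped by the
    given `tag` ('ignore' or 'restrict') whose configured pattern was `regex`."""
    # Both the inserted log message and the configured pattern are treated as literal text.
    pattern = FILTERED_LINE_TEMPLATE.format(message=re.escape(message), tag=tag,
                                            regex=re.escape(regex))
    return re.match(pattern, ossec_log_line) is not None


# 'Log matches both' case from cases_restrict_ignore_regex_values.yaml: the line is
# dropped by the ignore tag, so a matching DEBUG entry should exist for tag='ignore'.
log_sample = "Nov 10 12:19:04 localhost sshd: log matches ignore restrict"
ossec_line = ("2022/11/10 12:19:05 wazuh-logcollector: DEBUG: Ignoring the log line "
              f"'{log_sample}' due to ignore config: 'ignore'")
print(line_was_filtered(ossec_line, log_sample, tag='ignore', regex='ignore'))  # True
```

Under that assumption, the "Log matches both" case should yield a matching entry for the ignore tag and none for the restrict tag, which is exactly what `test_restrict_ignore_regex_values` asserts via `check_ignore_restrict_message` and `check_ignore_restrict_message_not_found`.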
diff --git a/tests/integration/test_vulnerability_detector/data/feeds/alas/custom_alas_2022_feed.json b/tests/integration/test_vulnerability_detector/data/feeds/alas/custom_alas_2022_feed.json new file mode 100644 index 0000000000..7f7a294aae --- /dev/null +++ b/tests/integration/test_vulnerability_detector/data/feeds/alas/custom_alas_2022_feed.json @@ -0,0 +1,79 @@ +{ + "advisories": { + "ALAS-2022-0000": { + "severity": "important", + "publishedDate": "Thu, 06 May 2022 19:11:00 GMT", + "lastModifiedDate": "Fri, 07 May 2022 19:53:00 GMT", + "references": [ + "https://github.com/wazuh/wazuh-qa" + ], + "vulnerabilities": [ + "CVE-000" + ], + "fixed_packages": [ + "custom-package-0-1.0.0-12.100.amzn2022.x64", + "custom-package-0-1.0.0-12.100.amzn2022.x86_64" + ] + }, + "ALAS-2022-0001": { + "severity": "important", + "publishedDate": "Thu, 06 May 2022 19:11:00 GMT", + "lastModifiedDate": "Fri, 07 May 2022 19:53:00 GMT", + "references": [ + "https://github.com/wazuh/wazuh-qa" + ], + "vulnerabilities": [ + "CVE-001" + ], + "fixed_packages": [ + "custom-package-1-1.0.0-12.100.amzn2022.x64", + "custom-package-1-1.0.0-12.100.amzn2022.x86_64" + ] + }, + "ALAS-2022-0002": { + "severity": "important", + "publishedDate": "Thu, 06 May 2022 19:11:00 GMT", + "lastModifiedDate": "Fri, 07 May 2022 19:53:00 GMT", + "references": [ + "https://github.com/wazuh/wazuh-qa" + ], + "vulnerabilities": [ + "CVE-002" + ], + "fixed_packages": [ + "custom-package-2-1.1.0-12.100.amzn2022.x64", + "custom-package-2-1.1.0-12.100.amzn2022.x86_64" + ] + }, + "ALAS-2022-0003": { + "severity": "important", + "publishedDate": "Thu, 06 May 2022 19:11:00 GMT", + "lastModifiedDate": "Fri, 07 May 2022 19:53:00 GMT", + "references": [ + "https://github.com/wazuh/wazuh-qa" + ], + "vulnerabilities": [ + "CVE-003" + ], + "fixed_packages": [ + "custom-package-3-1.1.0-12.100.amzn2022.x64", + "custom-package-3-1.1.0-12.100.amzn2022.x86_64" + ] + }, + "ALAS-2022-0004": { + "severity": "important", + "publishedDate": "Thu, 06 May 2022 19:11:00 GMT", + "lastModifiedDate": "Fri, 07 May 2022 19:53:00 GMT", + "references": [ + "https://github.com/wazuh/wazuh-qa" + ], + "vulnerabilities": [ + "CVE-004" + ], + "fixed_packages": [ + "custom-package-4-1.1.0-12.100.amzn2022.x64", + "custom-package-4-1.1.0-12.100.amzn2022.x86_64" + ] + } + } +} diff --git a/tests/integration/test_vulnerability_detector/data/feeds/cpe_helper/custom_generic_cpe_helper_one_package.json b/tests/integration/test_vulnerability_detector/data/feeds/cpe_helper/custom_generic_cpe_helper_one_package.json new file mode 100644 index 0000000000..0931fb79bc --- /dev/null +++ b/tests/integration/test_vulnerability_detector/data/feeds/cpe_helper/custom_generic_cpe_helper_one_package.json @@ -0,0 +1,38 @@ +{ + "VERSION_TAG": "VERSION_VALUE", + "FORMAT_TAG": "FORMAT_VALUE", + "UPDATE_TAG": "UPDATE_VALUE", + "DICTIONARY_TAG": [ + { + "TARGET_TAG": "TARGET_VALUE", + "SOURCE_TAG": { + "VENDOR_S_TAG": [ + "VENDOR_S_VALUE" + ], + "PRODUCT_S_TAG": [ + "PRODUCT_S_VALUE_0" + ], + "VERSION_S_TAG": ["VERSION_S_VALUE"] + }, + "TRANSLATION_TAG": { + "VENDOR_T_TAG": [ + "VENDOR_T_VALUE" + ], + "PRODUCT_T_TAG": [ + "PRODUCT_T_VALUE_0" + ], + "VERSION_T_TAG": ["VERSION_T_VALUE"] + }, + "ACTION_TAG": [ + "ACTION_VALUE_0", + "ACTION_VALUE_1" + ] + } + ], + "LICENSE_TAG": { + "TITLE_TAG": "TITLE_VALUE", + "COPYRIGHT_TAG": "COPYRIGHT_VALUE", + "DATE_TAG": "DATE_VALUE", + "TYPE_TAG" : "TYPE_VALUE" + } + } diff --git 
a/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor.json b/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor.json new file mode 100644 index 0000000000..1f80f08860 --- /dev/null +++ b/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor.json @@ -0,0 +1,14 @@ +[ + { + "scan": { + "id": 0, + "time": "2021-11-20T12:41:27Z" + }, + "architecture": "x86_64", + "format": "win", + "name": "custom-package-0 1.0.0", + "size": 0, + "vendor": "NULL", + "cveid": "CVE-000" + } +] diff --git a/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor_version.json b/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor_version.json new file mode 100644 index 0000000000..2f6baae6c6 --- /dev/null +++ b/tests/integration/test_vulnerability_detector/data/vulnerable_packages/custom_vulnerable_pkg_empty_vendor_version.json @@ -0,0 +1,15 @@ +[ + { + "scan": { + "id": 0, + "time": "2021-11-20T12:41:27Z" + }, + "architecture": "x86_64", + "format": "win", + "name": "custom-package-0 1.0.0", + "size": 0, + "vendor": "NULL", + "cveid": "CVE-000", + "version": "NULL" + } +] diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_fields.yaml b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_fields.yaml new file mode 100644 index 0000000000..5cb06536d2 --- /dev/null +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_fields.yaml @@ -0,0 +1,229 @@ +- name: Missing vendor field + description: Indexing CPE helper with missing vendor field + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: ^custom-package-0.* + VERSION_S_VALUE: ^custom-package-0 ([0-9]+\\.*[0-9]*\\.*[0-9]*-*[0-9]*) + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: "" + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: set_version_if_product_matches + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. 
+ TYPE_VALUE: GPLv2 + +- name: Missing vendor and version fields + description: Indexing CPE helper with missing vendor and version fields + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: ^custom-package-0.* + VERSION_S_VALUE: "" + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: ^custom-package-0 ([0-9]+\\.*[0-9]*\\.*[0-9]*-*[0-9]*) + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: set_version_if_product_matches + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. + TYPE_VALUE: GPLv2 + +- name: Missing set_version_if_product_matches action field + description: Indexing CPE helper with missing set_version_if_product_matches action field + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: ^custom-package-0.* + VERSION_S_VALUE: "" + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: ^custom-package-0 ([0-9]+\\.*[0-9]*\\.*[0-9]*-*[0-9]*) + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: "" + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. 
+ TYPE_VALUE: GPLv2 + +- name: Replace_vendor instead of set_version_if_product_matches action fields + description: Indexing CPE helper with replace_vendor instead of set_version_if_product_matches action fields + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: ^custom-package-0.* + VERSION_S_VALUE: "" + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: ^custom-package-0 ([0-9]+\\.*[0-9]*\\.*[0-9]*-*[0-9]*) + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: replace_vendor + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. + TYPE_VALUE: GPLv2 + +- name: Missing all source fields + description: Indexing CPE helper with missing all source fields + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: "" + VERSION_S_VALUE: "" + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: ^custom-package-0 ([0-9]+\\.*[0-9]*\\.*[0-9]*-*[0-9]*) + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: replace_vendor + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. 
+ TYPE_VALUE: GPLv2 diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_vendor_version.yaml b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_vendor_version.yaml new file mode 100644 index 0000000000..a35763ded6 --- /dev/null +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_empty_vendor_version.yaml @@ -0,0 +1,45 @@ +- name: Missing all the source fields and version translation field + description: Indexing CPE helper with missing all the source fields and version translation field + configuration_parameters: + NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH + metadata: + system: WINDOWS_10 + wrong_field: null + missing_field: [] + tags: + VERSION_TAG: version + FORMAT_TAG: format_version + UPDATE_TAG: update_date + DICTIONARY_TAG: dictionary + TARGET_TAG: target + SOURCE_TAG: source + VENDOR_S_TAG: vendor + PRODUCT_S_TAG: product + VERSION_S_TAG: version + TRANSLATION_TAG: translation + VENDOR_T_TAG: vendor + PRODUCT_T_TAG: product + VERSION_T_TAG: version + ACTION_TAG: action + LICENSE_TAG: license + TITLE_TAG: title + COPYRIGHT_TAG: copyright + DATE_TAG: date + TYPE_TAG: type + values: + VERSION_VALUE: "1.0" + FORMAT_VALUE: "1.0" + UPDATE_VALUE: 2050-10-02T10:56Z + TARGET_VALUE: windows + VENDOR_S_VALUE: "" + PRODUCT_S_VALUE_0: "" + VERSION_S_VALUE: "" + VENDOR_T_VALUE: wazuh-mocking + PRODUCT_T_VALUE_0: custom-package-0 + VERSION_T_VALUE: "" + ACTION_VALUE_0: replace_product + ACTION_VALUE_1: replace_vendor + TITLE_VALUE: Dictionary of CPEs to analyze system vulnerabilities. + COPYRIGHT_VALUE: Copyright (C) 2015-2019, Wazuh Inc. + DATE_VALUE: March 6, 2019. + TYPE_VALUE: GPLv2 diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_missing_fields.yaml b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_missing_fields.yaml index 6b0bb35c91..78469ef842 100644 --- a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_missing_fields.yaml +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_missing_fields.yaml @@ -1,4 +1,4 @@ -- name: Indexing CPE helper with missing version field +- name: Missing version field description: Indexing CPE helper with missing version field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -54,7 +54,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing format_version field +- name: Missing format_version field description: Indexing CPE helper with missing format_version field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -110,7 +110,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing update_date field +- name: Missing update_date field description: Indexing CPE helper with missing update_date field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -166,7 +166,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing target field +- name: Missing target field description: Indexing CPE helper with missing target field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -222,7 +222,7 @@ DATE_VALUE: March 6, 2019. 
TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing action field +- name: Missing action field description: Indexing CPE helper with missing action field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -278,7 +278,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing vendor field +- name: Missing vendor field description: Indexing CPE helper with missing vendor field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -334,7 +334,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with missing product field +- name: Missing product field description: Indexing CPE helper with missing product field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_tags.yaml b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_tags.yaml index 04658367af..e8ba5e42c3 100644 --- a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_tags.yaml +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_tags.yaml @@ -1,4 +1,4 @@ -- name: Indexing CPE helper with wrong source vendor fields +- name: Wrong source vendor fields description: Indexing CPE helper with wrong source vendor fields configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -53,7 +53,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong translation product fields +- name: Wrong translation product fields description: Indexing CPE helper with wrong translation product fields configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -108,7 +108,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong version field +- name: Wrong version field description: Indexing CPE helper with wrong version field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -163,7 +163,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong format_version fields +- name: Wrong format_version fields description: Indexing CPE helper with wrong format_version fields configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -218,7 +218,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong update_date fields +- name: Wrong update_date fields description: Indexing CPE helper with wrong update_date fields configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -273,7 +273,7 @@ DATE_VALUE: March 6, 2019. 
TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong target field +- name: Wrong target field description: Indexing CPE helper with wrong target field configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_values.yaml b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_values.yaml index 7ed91bad3e..a0cecb6ad3 100644 --- a/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_values.yaml +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/data/test_cases/cases_cpe_indexing_wrong_values.yaml @@ -1,4 +1,4 @@ -- name: Indexing CPE helper with wrong version value +- name: Wrong version value description: Indexing CPE helper with wrong version value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -53,7 +53,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong update_date value +- name: Wrong update_date value description: Indexing CPE helper with wrong update_date value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -108,7 +108,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong target value +- name: Wrong target value description: Indexing CPE helper with wrong target value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -163,7 +163,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong format_version value +- name: Wrong format_version value description: Indexing CPE helper with wrong format_version value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -218,7 +218,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong source vendor value +- name: Wrong source vendor value description: Indexing CPE helper with wrong source vendor value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -273,7 +273,7 @@ DATE_VALUE: March 6, 2019. TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong source product value +- name: Wrong source product value description: Indexing CPE helper with wrong source product value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH @@ -328,7 +328,7 @@ DATE_VALUE: March 6, 2019. 
TYPE_VALUE: GPLv2 -- name: Indexing CPE helper with wrong action value +- name: Wrong action value description: Indexing CPE helper with wrong action value configuration_parameters: NVD_JSON_PATH: CUSTOM_NVD_JSON_PATH diff --git a/tests/integration/test_vulnerability_detector/test_cpe_helper/test_cpe_helper.py b/tests/integration/test_vulnerability_detector/test_cpe_helper/test_cpe_helper.py index f9b9db8a8a..40f6a26de7 100644 --- a/tests/integration/test_vulnerability_detector/test_cpe_helper/test_cpe_helper.py +++ b/tests/integration/test_vulnerability_detector/test_cpe_helper/test_cpe_helper.py @@ -46,7 +46,6 @@ ''' import os import pytest -import json from wazuh_testing.tools.configuration import load_configuration_template, get_test_cases_data from wazuh_testing.tools.configuration import update_configuration_template @@ -73,11 +72,19 @@ t2_cases_path = os.path.join(TEST_CASES_PATH, 'cases_cpe_indexing_wrong_values.yaml') t3_configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_cpe_indexing.yaml') t3_cases_path = os.path.join(TEST_CASES_PATH, 'cases_cpe_indexing_missing_fields.yaml') +t4_configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_cpe_indexing.yaml') +t4_cases_path = os.path.join(TEST_CASES_PATH, 'cases_cpe_indexing_empty_fields.yaml') +t5_configurations_path = os.path.join(CONFIGURATIONS_PATH, 'configuration_cpe_indexing.yaml') +t5_cases_path = os.path.join(TEST_CASES_PATH, 'cases_cpe_indexing_empty_vendor_version.yaml') # Custom paths custom_nvd_json_feed_path = os.path.join(TEST_FEEDS_PATH, 'nvd', vd.CUSTOM_NVD_FEED) custom_cpe_helper_path = os.path.join(TEST_FEEDS_PATH, 'cpe_helper', vd.CUSTOM_CPE_HELPER_TEMPLATE) custom_vulnerable_packages_path = os.path.join(TEST_PACKAGES_PATH, vd.CUSTOM_VULNERABLE_PACKAGES) +custom_vulnerable_pkg_empty_vendor_path = os.path.join(TEST_PACKAGES_PATH, + vd.CUSTOM_VULNERABLE_PKG_EMPTY_VENDOR) +custom_vulnerable_pkg_empty_vendor_version_path = os.path.join(TEST_PACKAGES_PATH, + vd.CUSTOM_VULNERABLE_PKG_EMPTY_VENDOR_VERSION) # CPE indexing packages test configurations (t1) t1_configuration_parameters, t1_configuration_metadata, t1_test_case_ids = get_test_cases_data(t1_cases_path) @@ -97,6 +104,18 @@ t3_configuration_metadata) t3_systems = [metadata['system'] for metadata in t3_configuration_metadata] +# CPE indexing packages test configurations (t4) +t4_configuration_parameters, t4_configuration_metadata, t4_test_case_ids = get_test_cases_data(t4_cases_path) +t4_configurations = load_configuration_template(t4_configurations_path, t4_configuration_parameters, + t4_configuration_metadata) +t4_systems = [metadata['system'] for metadata in t4_configuration_metadata] + +# CPE indexing packages test configurations (t5) +t5_configuration_parameters, t5_configuration_metadata, t5_test_case_ids = get_test_cases_data(t5_cases_path) +t5_configurations = load_configuration_template(t5_configurations_path, t5_configuration_parameters, + t5_configuration_metadata) +t5_systems = [metadata['system'] for metadata in t5_configuration_metadata] + # Set offline custom feeds configuration t1_configurations = update_configuration_template(t1_configurations, ['CUSTOM_NVD_JSON_PATH'], [custom_nvd_json_feed_path]) @@ -104,9 +123,19 @@ [custom_nvd_json_feed_path]) t3_configurations = update_configuration_template(t3_configurations, ['CUSTOM_NVD_JSON_PATH'], [custom_nvd_json_feed_path]) +t4_configurations = update_configuration_template(t4_configurations, ['CUSTOM_NVD_JSON_PATH'], + [custom_nvd_json_feed_path]) +t5_configurations = 
update_configuration_template(t5_configurations, ['CUSTOM_NVD_JSON_PATH'], + [custom_nvd_json_feed_path]) # Global vars -agent_packages = read_json_file(custom_vulnerable_packages_path) +t1_agent_packages = [read_json_file(custom_vulnerable_packages_path) for metadata in t1_configuration_metadata] +t2_agent_packages = [read_json_file(custom_vulnerable_packages_path) for metadata in t2_configuration_metadata] +t3_agent_packages = [read_json_file(custom_vulnerable_packages_path) for metadata in t3_configuration_metadata] +t4_agent_packages = [read_json_file(custom_vulnerable_pkg_empty_vendor_path) + for metadata in t4_configuration_metadata] +t5_agent_packages = [read_json_file(custom_vulnerable_pkg_empty_vendor_version_path) + for metadata in t5_configuration_metadata] def replace_cpe_json_fields(tags=None, values=None): @@ -169,22 +198,37 @@ def remove_cpe_json_fields(tags=None): @pytest.fixture(scope='function') -def prepare_environment(request, metadata, agent_system, mock_agent_with_custom_system): +def prepare_environment(request, metadata, agent_system, agent_packages, mock_agent_with_custom_system): """Prepare the environment with a mocked agent, vulnerable packages and a custom cpe_helper. - Mock an agent with a specified system. - Insert mocked vulnerables packages. - Update packages sync status. - Copy the custom CPE helper to the dictionaries folder. + - Force full scan. Args: metadata (dict): Test case metadata. agent_system (str): System to set to the mocked agent. + agent_packages (list): List of vulnerable packages mock_agent_with_custom_system (fixture): Mock an agent with a custom system. """ for package in agent_packages: - agent_db.insert_package(name=package['name'], version=package['version'], source=package['name'], - vendor=package['vendor'], agent_id=mock_agent_with_custom_system) + try: + version = package['version'] + except KeyError: + version = '' + try: + format = package['format'] + except KeyError: + format = 'rpm' + try: + architecture = package['architecture'] + except KeyError: + architecture = 'x64' + agent_db.insert_package(name=package['name'], format=format, architecture=architecture, + agent_id=mock_agent_with_custom_system, vendor=package['vendor'], + version=version, source=package['name']) # Sync packages info agent_db.update_sync_info(agent_id=mock_agent_with_custom_system, component="syscollector-packages") @@ -208,9 +252,11 @@ def prepare_environment(request, metadata, agent_system, mock_agent_with_custom_ write_json_file(CPE_HELPER_PATH, cpe_helper_backup_data) -@pytest.mark.parametrize('configuration, metadata, agent_system', - zip(t1_configurations, t1_configuration_metadata, t1_systems), ids=t1_test_case_ids) -def test_cpe_indexing_wrong_tags(configuration, metadata, agent_system, set_wazuh_configuration_vdt, +@pytest.mark.tier(level=2) +@pytest.mark.parametrize('configuration, metadata, agent_system, agent_packages', + zip(t1_configurations, t1_configuration_metadata, t1_systems, t1_agent_packages), + ids=t1_test_case_ids) +def test_cpe_indexing_wrong_tags(configuration, metadata, agent_system, agent_packages, set_wazuh_configuration_vdt, truncate_monitored_files, clean_cve_tables_func, prepare_environment, restart_modulesd_function): ''' @@ -232,7 +278,7 @@ def test_cpe_indexing_wrong_tags(configuration, metadata, agent_system, set_wazu - Restore initial configuration, both ossec.conf and local_internal_options.conf. 
- Restore the original cpe_helper.json - wazuh_min_version: 4.4.0 + wazuh_min_version: 4.5.0 tier: 2 @@ -246,6 +292,9 @@ def test_cpe_indexing_wrong_tags(configuration, metadata, agent_system, set_wazu - agent_system: type: str brief: System to set to the mocked agent. + - agent_packages + type: list + brief: List of vulnerable packages. - set_wazuh_configuration_vdt: type: fixture brief: Set the wazuh configuration according to the configuration data. @@ -293,9 +342,11 @@ def test_cpe_indexing_wrong_tags(configuration, metadata, agent_system, set_wazu raise AttributeError('Unexpected log') -@pytest.mark.parametrize('configuration, metadata, agent_system', - zip(t2_configurations, t2_configuration_metadata, t2_systems), ids=t2_test_case_ids) -def test_cpe_indexing_wrong_values(configuration, metadata, agent_system, set_wazuh_configuration_vdt, +@pytest.mark.tier(level=2) +@pytest.mark.parametrize('configuration, metadata, agent_system, agent_packages', + zip(t2_configurations, t2_configuration_metadata, t2_systems, t2_agent_packages), + ids=t2_test_case_ids) +def test_cpe_indexing_wrong_values(configuration, metadata, agent_system, agent_packages, set_wazuh_configuration_vdt, truncate_monitored_files, clean_cve_tables_func, prepare_environment, restart_modulesd_function): ''' @@ -317,7 +368,7 @@ def test_cpe_indexing_wrong_values(configuration, metadata, agent_system, set_wa - Restore initial configuration, both ossec.conf and local_internal_options.conf. - Restore the original cpe_helper.json - wazuh_min_version: 4.4.0 + wazuh_min_version: 4.5.0 tier: 2 @@ -331,6 +382,9 @@ def test_cpe_indexing_wrong_values(configuration, metadata, agent_system, set_wa - agent_system: type: str brief: System to set to the mocked agent. + - agent_packages + type: list + brief: List of vulnerable packages. - set_wazuh_configuration_vdt: type: fixture brief: Set the wazuh configuration according to the configuration data. @@ -378,9 +432,11 @@ def test_cpe_indexing_wrong_values(configuration, metadata, agent_system, set_wa raise AttributeError('Unexpected log') -@pytest.mark.parametrize('configuration, metadata, agent_system', - zip(t3_configurations, t3_configuration_metadata, t3_systems), ids=t3_test_case_ids) -def test_cpe_indexing_missing_field(configuration, metadata, agent_system, set_wazuh_configuration_vdt, +@pytest.mark.tier(level=2) +@pytest.mark.parametrize('configuration, metadata, agent_system, agent_packages', + zip(t3_configurations, t3_configuration_metadata, t3_systems, t3_agent_packages), + ids=t3_test_case_ids) +def test_cpe_indexing_missing_field(configuration, metadata, agent_system, agent_packages, set_wazuh_configuration_vdt, truncate_monitored_files, clean_cve_tables_func, prepare_environment, restart_modulesd_function): ''' @@ -402,7 +458,7 @@ def test_cpe_indexing_missing_field(configuration, metadata, agent_system, set_w - Restore initial configuration, both ossec.conf and local_internal_options.conf. - Restore the original cpe_helper.json - wazuh_min_version: 4.4.0 + wazuh_min_version: 4.5.0 tier: 2 @@ -416,6 +472,9 @@ def test_cpe_indexing_missing_field(configuration, metadata, agent_system, set_w - agent_system: type: str brief: System to set to the mocked agent. + - agent_packages + type: list + brief: List of vulnerable packages. - set_wazuh_configuration_vdt: type: fixture brief: Set the wazuh configuration according to the configuration data. 
@@ -460,3 +519,156 @@ def test_cpe_indexing_missing_field(configuration, metadata, agent_system, set_w raise AttributeError('Unexpected log') elif expected_result == 'error_inserting_package': evm.check_error_inserting_package(agent_id=prepare_environment) + + +@pytest.mark.tier(level=1) +@pytest.mark.parametrize('configuration, metadata, agent_system, agent_packages', + zip(t4_configurations, t4_configuration_metadata, t4_systems, t4_agent_packages), + ids=t4_test_case_ids) +def test_cpe_indexing_empty_fields(configuration, metadata, agent_system, agent_packages, set_wazuh_configuration_vdt, + truncate_monitored_files, clean_cve_tables_func, prepare_environment, + restart_modulesd_function): + ''' + description: Check if the packages are indexed in the database by checking the respective log in the ossec.log file, + and if the alert of the vulnerable package comes out when some tag are empty. + + test_phases: + - setup: + - Load Wazuh light configuration, with custom feeds. + - Apply ossec.conf configuration changes according to the configuration template and use case. + - Apply custom settings in local_internal_options.conf. + - Mock an agent with Windows system and vulnerable packages. + - Backup the original cpe_helper.json and copy a custom CPE helper with new tags and values. + - Truncate wazuh logs. + - Restart wazuh-modulesd daemon to apply configuration changes. + - test: + - Check in the log and alert for specific information. + - teardown: + - Truncate wazuh logs. + - Restore initial configuration, both ossec.conf and local_internal_options.conf. + - Restore the original cpe_helper.json + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Wazuh configuration data. Needed for set_wazuh_configuration fixture. + - metadata: + type: dict + brief: Wazuh configuration metadata. + - agent_system: + type: str + brief: System to set to the mocked agent. + - agent_packages + type: list + brief: List of vulnerable packages. + - set_wazuh_configuration_vdt: + type: fixture + brief: Set the wazuh configuration according to the configuration data. + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. + - clean_cve_tables_func: + type: fixture + brief: Clean all CVE tables. + - prepare_environment: + type: fixture + brief: Setup the initial test state. + - restart_modulesd_function: + type: fixture + brief: Restart the wazuh-modulesd daemon. + + assertions: + - Check for a specific log and alert. + + input_description: + - The `configuration_cpe_indexing.yaml` file provides the module configuration for this test. + - The `cases_cpe_indexing_missing_fields.yaml` file provides the test cases. 
+ + expected_output: + - r"The CPE .*a:{package_vendor}:{package_name}.* from the agent '{agent_id}' was indexed" + - fr".*"agent":."id":"{agent_id}".*{cve} affects {package}', prefix='.*" + ''' + for package in agent_packages: + evm.check_cpe_helper_packages_indexed(package_name=metadata['values']['PRODUCT_T_VALUE_0'], + package_vendor=metadata['values']['VENDOR_T_VALUE'], + agent_id=prepare_environment, timeout=vd.T_20) + + evm.check_vulnerability_affects_alert(agent_id=prepare_environment, + package=metadata['values']['PRODUCT_T_VALUE_0'], cve=package['cveid']) + + +@pytest.mark.tier(level=1) +@pytest.mark.parametrize('configuration, metadata, agent_system, agent_packages', + zip(t5_configurations, t5_configuration_metadata, t5_systems, t5_agent_packages), + ids=t5_test_case_ids) +def test_cpe_indexing_empty_vendor_version(configuration, metadata, agent_system, agent_packages, + set_wazuh_configuration_vdt, truncate_monitored_files, + clean_cve_tables_func, prepare_environment, restart_modulesd_function): + ''' + description: Check that when vendor and version tags are empty, and the action tag is not the correct to + extract the version field, the package cannot be indexed. + + test_phases: + - setup: + - Load Wazuh light configuration, with custom feeds. + - Apply ossec.conf configuration changes according to the configuration template and use case. + - Apply custom settings in local_internal_options.conf. + - Mock an agent with Windows system and vulnerable packages. + - Backup the original cpe_helper.json and copy a custom CPE helper with new tags and values. + - Truncate wazuh logs. + - Restart wazuh-modulesd daemon to apply configuration changes. + - test: + - Check in the log and alert for specific information. + - teardown: + - Truncate wazuh logs. + - Restore initial configuration, both ossec.conf and local_internal_options.conf. + - Restore the original cpe_helper.json + + wazuh_min_version: 4.5.0 + + tier: 1 + + parameters: + - configuration: + type: dict + brief: Wazuh configuration data. Needed for set_wazuh_configuration fixture. + - metadata: + type: dict + brief: Wazuh configuration metadata. + - agent_system: + type: str + brief: System to set to the mocked agent. + - agent_packages + type: list + brief: List of vulnerable packages. + - set_wazuh_configuration_vdt: + type: fixture + brief: Set the wazuh configuration according to the configuration data. + - truncate_monitored_files: + type: fixture + brief: Truncate all the log files and json alerts files before and after the test execution. + - clean_cve_tables_func: + type: fixture + brief: Clean all CVE tables. + - prepare_environment: + type: fixture + brief: Setup the initial test state. + - restart_modulesd_function: + type: fixture + brief: Restart the wazuh-modulesd daemon. + + assertions: + - Check for a specific log and alert. + + input_description: + - The `configuration_cpe_indexing.yaml` file provides the module configuration for this test. + - The `cases_cpe_indexing_missing_vendor_version.yaml` file provides the test cases. + + expected_output: + - fr"DEBUG: .* Couldn't get the version of the CPE for the {package_name} package." 
+ ''' + evm.check_version_log(package_name=metadata['values']['PRODUCT_T_VALUE_0']) diff --git a/tests/integration/test_vulnerability_detector/test_feeds/data/test_cases/cases_download_feeds.yaml b/tests/integration/test_vulnerability_detector/test_feeds/data/test_cases/cases_download_feeds.yaml index 4c05e598eb..094dc2cd5e 100644 --- a/tests/integration/test_vulnerability_detector/test_feeds/data/test_cases/cases_download_feeds.yaml +++ b/tests/integration/test_vulnerability_detector/test_feeds/data/test_cases/cases_download_feeds.yaml @@ -156,8 +156,19 @@ download_timeout: 120 update_treshold_weeks: 3 -- name: NVD - description: National Vulnerability Database provider +- name: 'ALAS-2022' + description: 'Amazon Linux provider' + configuration_parameters: + PROVIDER: 'alas' + OS: 'amazon-linux-2022' + metadata: + provider_name: 'Amazon Linux 2022' + provider_os: 'Amazon-Linux-2022' + download_timeout: 120 + update_treshold_weeks: 3 + +- name: 'NVD' + description: 'National Vulnerability Database provider' configuration_parameters: PROVIDER: nvd OS: '' diff --git a/tests/integration/test_vulnerability_detector/test_feeds/test_download_feeds.py b/tests/integration/test_vulnerability_detector/test_feeds/test_download_feeds.py index ebeff4b057..f0a437bc0e 100644 --- a/tests/integration/test_vulnerability_detector/test_feeds/test_download_feeds.py +++ b/tests/integration/test_vulnerability_detector/test_feeds/test_download_feeds.py @@ -29,6 +29,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 @@ -132,7 +133,7 @@ def test_download_feeds(configuration, metadata, set_wazuh_configuration_vdt, tr ''' if metadata['provider_os'] == 'BIONIC': pytest.xfail(reason='Ubuntu Bionic feed parsing error - Wazuh/Wazuh Issue #13556') - + # Check that the feed update has started evm.check_provider_database_update_start_log(metadata['provider_name']) # Check that the feed has been updated successfully @@ -144,7 +145,6 @@ def test_download_feeds(configuration, metadata, set_wazuh_configuration_vdt, tr evm.check_provider_database_update_finish_log(provider_name=metadata['provider_json_name'], timeout=metadata['download_timeout']) - # Check that the timestamp of the feed metadata does not exceed the established threshold limit. 
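    # Note: metadata is loaded from the YAML test cases, so the sentinel value compared here is the
    # string 'None' rather than the Python None object.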
if metadata['update_treshold_weeks'] != 'None': assert vd.feed_is_recently_updated(provider_name=metadata['provider_name'], provider_os=metadata['provider_os'], diff --git a/tests/integration/test_vulnerability_detector/test_feeds/test_duplicate_feeds.py b/tests/integration/test_vulnerability_detector/test_feeds/test_duplicate_feeds.py index 6067fbe53d..c3d71b2256 100644 --- a/tests/integration/test_vulnerability_detector/test_feeds/test_duplicate_feeds.py +++ b/tests/integration/test_vulnerability_detector/test_feeds/test_duplicate_feeds.py @@ -29,6 +29,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_feeds/test_import_invalid_feed_type.py b/tests/integration/test_vulnerability_detector/test_feeds/test_import_invalid_feed_type.py index 7ab19524f4..86c71a7e6a 100644 --- a/tests/integration/test_vulnerability_detector/test_feeds/test_import_invalid_feed_type.py +++ b/tests/integration/test_vulnerability_detector/test_feeds/test_import_invalid_feed_type.py @@ -27,6 +27,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_disabled.yaml b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_disabled.yaml index 7864f98fbf..fa47be6f6d 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_disabled.yaml +++ b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_disabled.yaml @@ -16,8 +16,17 @@ metadata: provider_name: Amazon Linux 2 -- name: Ubuntu Focal - description: Test disabled Ubuntu Focal +- name: 'Amazon Linux 2022' + description: 'Test disabled Amazon Linux 2022' + configuration_parameters: + ENABLED: 'no' + PROVIDER: 'alas' + OS: 'amazon-linux-2022' + metadata: + provider_name: 'Amazon Linux 2022' + +- name: 'Ubuntu Focal' + description: 'Test disabled Ubuntu Focal' configuration_parameters: ENABLED: 'no' PROVIDER: canonical diff --git a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_enabled.yaml b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_enabled.yaml index 4e6b65cd2c..0f5ce73a6b 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_enabled.yaml +++ b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_enabled.yaml @@ -16,8 +16,17 @@ metadata: provider_name: Amazon Linux 2 -- name: Ubuntu Focal - description: Test enabled Ubuntu Focal +- name: 'Amazon Linux 2022' + description: 'Test enabled Amazon Linux 2022' + configuration_parameters: + ENABLED: 'yes' + PROVIDER: 'alas' + OS: 'amazon-linux-2022' + metadata: + provider_name: 'Amazon Linux 2022' + +- name: 'Ubuntu Focal' + description: 'Test enabled Ubuntu Focal' configuration_parameters: ENABLED: 'yes' PROVIDER: canonical diff --git a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_os.yaml b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_os.yaml index b31b2ae2b7..aa586a9d9f 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_os.yaml +++ b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_os.yaml @@ -16,8 +16,17 @@ provider_name: Amazon Linux 2 os: amazon-linux-2 -- name: Ubuntu Trusty - 
description: Ubuntu Trusty provider +- name: 'Amazon Linux 2022' + description: 'Amazon Linux 2022 provider' + configuration_parameters: + PROVIDER: 'alas' + OS: 'amazon-linux-2022' + metadata: + provider_name: 'Amazon Linux 2022' + os: 'amazon-linux-2022' + +- name: 'Ubuntu Trusty' + description: 'Ubuntu Trusty provider' configuration_parameters: PROVIDER: canonical OS: trusty diff --git a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_update_interval.yaml b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_update_interval.yaml index f55e243787..84529339b4 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_update_interval.yaml +++ b/tests/integration/test_vulnerability_detector/test_providers/data/test_cases/cases_update_interval.yaml @@ -1,11 +1,11 @@ -- name: 'Amazon_Linux' - description: 'Test update interval 5s Amazon Linux' +- name: 'Amazon Linux 2022' + description: 'Test update interval 5s Amazon Linux 2022' configuration_parameters: PROVIDER: 'alas' - OS: amazon-linux + OS: amazon-linux-2022 UPDATE_INTERVAL: '5s' metadata: - provider_name: 'Amazon Linux 1' + provider_name: 'Amazon Linux 2022' update_interval: '5s' - name: 'RedHat' diff --git a/tests/integration/test_vulnerability_detector/test_providers/test_enabled.py b/tests/integration/test_vulnerability_detector/test_providers/test_enabled.py index 8a9868462c..8eccfd5930 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/test_enabled.py +++ b/tests/integration/test_vulnerability_detector/test_providers/test_enabled.py @@ -28,6 +28,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_providers/test_os.py b/tests/integration/test_vulnerability_detector/test_providers/test_os.py index 219faa5998..af86ea3a70 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/test_os.py +++ b/tests/integration/test_vulnerability_detector/test_providers/test_os.py @@ -29,6 +29,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_providers/test_update_interval.py b/tests/integration/test_vulnerability_detector/test_providers/test_update_interval.py index c7a485faa4..2ada88d1d6 100644 --- a/tests/integration/test_vulnerability_detector/test_providers/test_update_interval.py +++ b/tests/integration/test_vulnerability_detector/test_providers/test_update_interval.py @@ -29,6 +29,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_and_nvd_vulnerabilities.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_and_nvd_vulnerabilities.yaml index cf59cbc6fe..45c2c3bfb7 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_and_nvd_vulnerabilities.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_and_nvd_vulnerabilities.yaml @@ -165,8 +165,8 @@ value: 'yes' - os: attributes: - - path: CUSTOM_ALAS2_JSON_FEED - value: 'amazon-linux-2' + - path: CUSTOM_ALAS_2022_JSON_FEED + value: 
'amazon-linux-2022' - provider: attributes: - name: 'nvd' diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_vulnerabilities.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_vulnerabilities.yaml index 8f01f7e34b..c064db94f5 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_vulnerabilities.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_provider_vulnerabilities.yaml @@ -165,8 +165,8 @@ value: 'yes' - os: attributes: - - path: CUSTOM_ALAS2_JSON_FEED - value: 'amazon-linux-2' + - path: CUSTOM_ALAS_2022_JSON_FEED + value: 'amazon-linux-2022' - provider: attributes: - name: 'nvd' diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_vulnerability_removal.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_vulnerability_removal.yaml index 759b1c31c7..fb675f8093 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_vulnerability_removal.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/configuration_template/configuration_scan_vulnerability_removal.yaml @@ -60,6 +60,66 @@ - disabled: value: 'no' +# Alas Configuration +- sections: + - section: vulnerability-detector + elements: + - enabled: + value: 'yes' + - interval: + value: '5s' + - min_full_scan_interval: + value: '5s' + - run_on_start: + value: 'yes' + - provider: + attributes: + - name: 'alas' + elements: + - enabled: + value: 'yes' + - os: + attributes: + - path: CUSTOM_ALAS_2022_JSON_FEED + value: 'amazon-linux-2022' + - provider: + attributes: + - name: 'nvd' + elements: + - enabled: + value: 'yes' + - path: + value: CUSTOM_NVD_JSON_FEED + - update_interval: + value: '10s' + + - section: sca + elements: + - enabled: + value: 'no' + + - section: rootcheck + elements: + - disabled: + value: 'yes' + + - section: syscheck + elements: + - disabled: + value: 'yes' + + - section: wodle + attributes: + - name: 'syscollector' + elements: + - disabled: + value: 'yes' + + - section: auth + elements: + - disabled: + value: 'no' + # SUSE configuration - sections: - section: vulnerability-detector diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_and_nvd_vulnerabilities.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_and_nvd_vulnerabilities.yaml index 2e4d2cd599..2ca86f4af9 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_and_nvd_vulnerabilities.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_and_nvd_vulnerabilities.yaml @@ -41,11 +41,11 @@ configuration_parameters: null metadata: provider_name: 'alas' - system: 'ALAS2' - json_feed: 'custom_alas2_feed.json' + system: 'ALAS_2022' + json_feed: 'custom_alas_2022_feed.json' oval_feed: null nvd_feed: 'custom_nvd_feed.json' - json_feed_tag: CUSTOM_ALAS2_JSON_FEED + json_feed_tag: CUSTOM_ALAS_2022_JSON_FEED nvd_feed_tag: CUSTOM_NVD_JSON_FEED - name: 'Arch' diff --git 
a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_vulnerabilities.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_vulnerabilities.yaml index f838137268..0e2b7c7e0c 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_vulnerabilities.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_provider_vulnerabilities.yaml @@ -41,11 +41,11 @@ configuration_parameters: null metadata: provider_name: 'alas' - system: 'ALAS2' - json_feed: 'custom_alas2_feed.json' + system: 'ALAS_2022' + json_feed: 'custom_alas_2022_feed.json' oval_feed: null nvd_feed: 'custom_nvd_alternative_feed.json' - json_feed_tag: CUSTOM_ALAS2_JSON_FEED + json_feed_tag: CUSTOM_ALAS_2022_JSON_FEED nvd_feed_tag: CUSTOM_NVD_JSON_FEED - name: 'Arch' diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_vulnerability_removal.yaml b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_vulnerability_removal.yaml index efd37e2013..886b6ea83d 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_vulnerability_removal.yaml +++ b/tests/integration/test_vulnerability_detector/test_scan_results/data/test_cases/cases_scan_vulnerability_removal.yaml @@ -10,6 +10,24 @@ oval_feed_tag: CUSTOM_REDHAT_OVAL_FEED json_feed_tag: CUSTOM_REDHAT_JSON_FEED nvd_feed_tag: CUSTOM_NVD_JSON_FEED + test_package_version: '1.0.0' + test_package_version_not_vulnerable: '2.1.0' + test_package_0_name: 'custom-package-0' + test_package_1_name: 'custom-package-1' + test_package_0_cve: 'CVE-000' + test_package_1_cve: 'CVE-001' + +- name: 'Alert vulnerability removal - ALAS 2022' + description: 'Alert when a package is removed from the database' + configuration_parameters: null + metadata: + provider_name: 'alas' + system: 'ALAS_2022' + json_feed: 'custom_alas_2022_feed.json' + oval_feed: null + nvd_feed: 'custom_nvd_feed.json' + json_feed_tag: CUSTOM_ALAS_2022_JSON_FEED + nvd_feed_tag: CUSTOM_NVD_JSON_FEED test_package_vendor: 'WazuhIntegrationTests' test_package_version: '1.0.0' test_package_version_not_vulnerable: '2.1.0' diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py index 940011cf17..43dbe2fa73 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py +++ b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_and_nvd_vulnerabilities.py @@ -1,5 +1,5 @@ ''' -copyright: Copyright (C) 2015-2021, Wazuh Inc. +copyright: Copyright (C) 2015-2022, Wazuh Inc. Created by Wazuh, Inc. . 
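For context on the renamed tag: `CUSTOM_ALAS_2022_JSON_FEED` is a placeholder in the configuration templates that each scan-results test resolves to a local feed file before applying the configuration, in the same way the CPE indexing test earlier in this patch resolves `CUSTOM_NVD_JSON_PATH`. The sketch below is illustrative only; the file paths are placeholders, and it assumes both helpers live in `wazuh_testing.tools.configuration`, as `get_test_cases_data` does elsewhere in this patch.

```python
# Illustrative sketch: resolve the CUSTOM_ALAS_2022_JSON_FEED placeholder to a local feed file.
# Paths are placeholders; helper locations are assumed, not confirmed by this patch.
import os
from wazuh_testing.tools.configuration import get_test_cases_data, update_configuration_template

TEST_DATA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data')
cases_path = os.path.join(TEST_DATA_PATH, 'test_cases', 'cases_scan_provider_vulnerabilities.yaml')
alas_2022_feed_path = os.path.join(TEST_DATA_PATH, 'feeds', 'custom_alas_2022_feed.json')

configurations, configuration_metadata, case_ids = get_test_cases_data(cases_path)
# Substitute the placeholder tag declared in the template with the real feed path
# (same call style as the CUSTOM_NVD_JSON_PATH substitution earlier in this patch).
update_configuration_template(configurations, ['CUSTOM_ALAS_2022_JSON_FEED'], [alas_2022_feed_path])
```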
@@ -27,6 +27,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py index e7d3d9d6ee..b2bfd8bc33 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py +++ b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_provider_vulnerabilities.py @@ -1,5 +1,5 @@ ''' -copyright: Copyright (C) 2015-2021, Wazuh Inc. +copyright: Copyright (C) 2015-2022, Wazuh Inc. Created by Wazuh, Inc. . @@ -27,6 +27,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 diff --git a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py index b4208761bc..145a1059cc 100644 --- a/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py +++ b/tests/integration/test_vulnerability_detector/test_scan_results/test_scan_vulnerability_removal.py @@ -1,5 +1,5 @@ ''' -copyright: Copyright (C) 2015-2021, Wazuh Inc. +copyright: Copyright (C) 2015-2022, Wazuh Inc. Created by Wazuh, Inc. . @@ -29,6 +29,7 @@ os_version: - Arch Linux + - Amazon Linux 2022 - Amazon Linux 2 - Amazon Linux 1 - CentOS 8 @@ -131,7 +132,7 @@ def test_vulnerability_removal_update_package(configuration, metadata, agent_sys brief: Clean all the vulnerabilities tables before and after running the test. - prepare_full_scan_with_vuln_packages_and_custom_system: type: fixture - brief: Insert vulnerable package to a custom mocked agent and finally clean the database. + brief: Insert vulnerable packages to an agent with a custom system and finally clean the database. - setup_log_monitor: type: fixture brief: Create the log monitor. @@ -224,7 +225,7 @@ def test_vulnerability_removal_delete_package(configuration, metadata, agent_sys brief: Clean all the vulnerabilities tables before and after running the test. - prepare_full_scan_with_vuln_packages_and_custom_system: type: fixture - brief: Insert vulnerable package to a custom mocked agent and finally clean the database. + brief: Insert vulnerable packages to an agent with a custom system and finally clean the database. - setup_log_monitor: type: fixture brief: Create the log monitor. diff --git a/tests/integration/test_wazuh_db/test_agent_database_version.py b/tests/integration/test_wazuh_db/test_agent_database_version.py index 41d773ec0c..0ccb404960 100644 --- a/tests/integration/test_wazuh_db/test_agent_database_version.py +++ b/tests/integration/test_wazuh_db/test_agent_database_version.py @@ -9,7 +9,7 @@ pytestmark = [TIER0, LINUX, SERVER] # Variables -expected_database_version = '10' +expected_database_version = '11' # Fixtures @@ -34,7 +34,7 @@ def test_agent_database_version(restart_wazuh_daemon, remove_agents): - Check that the manager database version is the expected one. - Check that the agent database version is the expected one. - wazuh_min_version: 4.4.0 + wazuh_min_version: 4.6.0 parameters: - restart_wazuh_daemon: @@ -45,7 +45,7 @@ def test_agent_database_version(restart_wazuh_daemon, remove_agents): - Verify that database version is the expected one. 
expected_output: - - Database version: 10 + - Database version: 11 tags: - wazuh_db diff --git a/tests/performance/test_cluster/test_cluster_performance/data/10w_50000a_thresholds.yaml b/tests/performance/test_cluster/test_cluster_performance/data/10w_50000a_thresholds.yaml index 63f67317ee..4a18f03e5e 100644 --- a/tests/performance/test_cluster/test_cluster_performance/data/10w_50000a_thresholds.yaml +++ b/tests/performance/test_cluster/test_cluster_performance/data/10w_50000a_thresholds.yaml @@ -98,6 +98,6 @@ resources: mean: 197162 # (197 MB) reg_cof: 209.6 workers: - max: 201480 # (201 MB) + max: 256000 # (250 MB) mean: 145000 # (145 MB) reg_cof: 260 diff --git a/tests/reliability/test_cluster/test_cluster_logs/test_cluster_connection/test_cluster_connection.py b/tests/reliability/test_cluster/test_cluster_logs/test_cluster_connection/test_cluster_connection.py index 0ad545db27..9a80dd2d6b 100644 --- a/tests/reliability/test_cluster/test_cluster_logs/test_cluster_connection/test_cluster_connection.py +++ b/tests/reliability/test_cluster/test_cluster_logs/test_cluster_connection/test_cluster_connection.py @@ -33,7 +33,7 @@ def test_cluster_connection(artifacts_path): with open(log_file) as f: s = mmap(f.fileno(), 0, access=ACCESS_READ) # Search first successful connection message. - conn = re.search(rb'^.*Sucessfully connected to master.*$', s, flags=re.MULTILINE) + conn = re.search(rb'^.*Successfully connected to master.*$', s, flags=re.MULTILINE) if not conn: pytest.fail(f'Could not find "Sucessfully connected to master" message in the ' f'{node_name.search(log_file)[1]}') diff --git a/tests/reliability/test_cluster/test_cluster_logs/test_cluster_worker_logs_order/data/Integrity_check.yml b/tests/reliability/test_cluster/test_cluster_logs/test_cluster_worker_logs_order/data/Integrity_check.yml index 215c0ad47b..6a79d7aa5f 100644 --- a/tests/reliability/test_cluster/test_cluster_logs/test_cluster_worker_logs_order/data/Integrity_check.yml +++ b/tests/reliability/test_cluster/test_cluster_logs/test_cluster_worker_logs_order/data/Integrity_check.yml @@ -15,10 +15,10 @@ tag: "Compressing 'files_metadata.json'.*" - log_id: log3 parent: log2 - tag: 'Sending zip file to master.*' + tag: 'Sending zip file.*' - log_id: log4 parent: log3 - tag: 'Zip file sent to master.' + tag: 'Zip file sent.*' - log_id: log5 parent: log4 tag: 'Finished in .*' diff --git a/tests/system/test_active_response/conftest.py b/tests/system/test_active_response/conftest.py new file mode 100644 index 0000000000..14bebdd79d --- /dev/null +++ b/tests/system/test_active_response/conftest.py @@ -0,0 +1,114 @@ +# Copyright (C) 2015-2022, Wazuh Inc. +# Created by Wazuh, Inc. . +# This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 +import os +import ansible_runner +import pytest + + +suite_path = os.path.dirname(os.path.realpath(__file__)) + + +@pytest.fixture(scope='function') +def configure_environment(request): + """Fixture to configure environment. + + Execute the configuration playbooks declared in the test to configure the environment. + + Args: + request (fixture): Provide information on the executing test function. 
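+
+    Note:
+        The calling test module must define 'test_data_path', 'configuration_playbooks' and
+        'teardown_playbooks' (the latter may be None), and may optionally define
+        'configuration_extra_vars' with extra variables to pass to the playbooks.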
+ """ + inventory_playbook = request.config.getoption('--inventory-path') + roles_path = request.config.getoption('--roles-path') + + if not inventory_playbook: + raise ValueError('Inventory not specified') + + # For each configuration playbook previously declared in the test, get the complete path and run it + for playbook in getattr(request.module, 'configuration_playbooks'): + configuration_playbook_path = os.path.join(getattr(request.module, 'test_data_path'), 'playbooks', playbook) + parameters = {'playbook': configuration_playbook_path, + 'inventory': inventory_playbook, + 'envvars': {'ANSIBLE_ROLES_PATH': roles_path}} + + # Check if the module has extra variables to pass to the playbook + configuration_extra_vars = getattr(request.module, 'configuration_extra_vars', None) + if configuration_extra_vars is not None: + parameters.update({'extravars': configuration_extra_vars}) + + ansible_runner.run(**parameters) + + yield + + teardown_playbooks = getattr(request.module, 'teardown_playbooks') + + # Execute each playbook for the teardown + if teardown_playbooks is not None: + for playbook in teardown_playbooks: + teardown_playbook_path = os.path.join(getattr(request.module, 'test_data_path'), 'playbooks', playbook) + + parameters = {'playbook': teardown_playbook_path, + 'inventory': inventory_playbook, + 'envvars': {'ANSIBLE_ROLES_PATH': roles_path}} + + # Check if the module has extra variables to pass to the playbook + configuration_extra_vars = getattr(request.module, 'configuration_extra_vars', None) + if configuration_extra_vars is not None: + parameters.update({'extravars': configuration_extra_vars}) + + ansible_runner.run(**parameters) + + +@pytest.fixture(scope='function') +def generate_events(request, metadata): + """Fixture to generate events. + + Execute the playbooks declared in the test to generate events. + + Args: + request (fixture): Provide information on the executing test function. + metadata (dict): Dictionary with test case metadata. + """ + inventory_playbook = request.config.getoption('--inventory-path') + roles_path = request.config.getoption('--roles-path') + + if not inventory_playbook: + raise ValueError('Inventory not specified') + + # For each event generation playbook previously declared in the test, obtain the complete path and execute it. + for playbook in getattr(request.module, 'events_playbooks'): + events_playbook_path = os.path.join(getattr(request.module, 'test_data_path'), 'playbooks', playbook) + + parameters = {'playbook': events_playbook_path, + 'inventory': inventory_playbook, + 'envvars': {'ANSIBLE_ROLES_PATH': roles_path}} + # Check if the test case has extra variables to pass to the playbook and add them to the parameters in that case + if 'extra_vars' in metadata: + parameters.update({'extravars': metadata['extra_vars']}) + + ansible_runner.run(**parameters) + + +def pytest_addoption(parser): + """Method to add some options to launch tests. + + Args: + parser (argparse.ArgumentParser): Parser object to add the options. 
+ """ + parser.addoption( + '--inventory-path', + action='store', + metavar='INVENTORY_PATH', + default=None, + type=str, + help='Inventory path', + ) + + parser.addoption( + '--roles-path', + action='store', + metavar='ROLES_PATH', + default=os.path.join(suite_path, 'data', 'ansible_roles'), + type=str, + help='Ansible roles path.', + ) diff --git a/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/get_alert_json.yaml b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/get_alert_json.yaml new file mode 100644 index 0000000000..13325cac00 --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/get_alert_json.yaml @@ -0,0 +1,6 @@ +- name: Get alerts.json + fetch: + src: /var/ossec/logs/alerts/alerts.json + dest: /tmp/ + flat: true + become: true diff --git a/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/search_alert.yaml b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/search_alert.yaml new file mode 100644 index 0000000000..d4369a1d0d --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/search_alert.yaml @@ -0,0 +1,6 @@ +- name: Search alert in alerts log + become: true + wait_for: + path: /var/ossec/logs/alerts/alerts.json + search_regex: "{{ custom_regex }}" + timeout: "{{ timeout }}" diff --git a/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/truncate_alert_json.yaml b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/truncate_alert_json.yaml new file mode 100644 index 0000000000..e7d5b2b3c9 --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_alerts/tasks/truncate_alert_json.yaml @@ -0,0 +1,3 @@ +- name: Truncate file + shell: echo "" > /var/ossec/logs/alerts/alerts.json + become: true diff --git a/tests/system/test_active_response/data/ansible_roles/manage_logs/tasks/get_log_file.yaml b/tests/system/test_active_response/data/ansible_roles/manage_logs/tasks/get_log_file.yaml new file mode 100644 index 0000000000..fb188583b7 --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_logs/tasks/get_log_file.yaml @@ -0,0 +1,6 @@ +- name: Get ossec.log + fetch: + src: "{{ source_path }}" + dest: /tmp/{{ dest_path }} + flat: true + validate_checksum: false diff --git a/tests/system/test_active_response/data/ansible_roles/manage_wazuh/tasks/restart_wazuh.yaml b/tests/system/test_active_response/data/ansible_roles/manage_wazuh/tasks/restart_wazuh.yaml new file mode 100644 index 0000000000..87e7462201 --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_wazuh/tasks/restart_wazuh.yaml @@ -0,0 +1,33 @@ +# REQUIRED VARIABLES +# ------------------- +# +# GENERIC: +# - (String) os: Target operating system + +- name: Get installation type + become: true + shell: /var/ossec/bin/wazuh-control info + register: wazuh_info + when: os == 'linux' + +- name: Restart manager service on linux + become: true + service: + name: wazuh-manager + state: restarted + when: (os == 'linux' and 'server' in wazuh_info.stdout) + +- name: Restart agent service on linux + become: true + service: + name: wazuh-agent + state: restarted + when: (os == 'linux' and 'agent' in wazuh_info.stdout) + +- name: Restart wazuh on Windows + win_shell: | + net stop Wazuh + net start Wazuh + args: + executable: powershell.exe + when: os == 'windows' diff --git 
a/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_config.yaml b/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_config.yaml new file mode 100644 index 0000000000..0a01730a8f --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_config.yaml @@ -0,0 +1,19 @@ +# REQUIRED VARIABLES +# ------------------- +# +# GENERIC: +# - (String) os: Target operating system +# - (String) config_block: Wazuh configuration block + +- name: Configure ossec.conf linux + become: true + blockinfile: + path: /var/ossec/etc/ossec.conf + insertbefore: + block: "{{ config_block }}" + marker: + when: os == 'linux' + +- name: Configure ossec.conf windows + ansible.windows.win_shell: "{{ config_block }}" + when: os == 'windows' diff --git a/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_internal_options.yaml b/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_internal_options.yaml new file mode 100644 index 0000000000..74910cccc8 --- /dev/null +++ b/tests/system/test_active_response/data/ansible_roles/manage_wazuh_configurations/tasks/write_wazuh_internal_options.yaml @@ -0,0 +1,3 @@ +- name: Configure local_internal_options.conf windows + ansible.windows.win_shell: "{{ config_block }}" + when: os == 'windows' diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/configuration.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/configuration.yaml new file mode 100644 index 0000000000..cabd856127 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/configuration.yaml @@ -0,0 +1,79 @@ +- name: Configure manager environment + hosts: manager + become: true + tasks: + + - name: Configure the active-response module + include_role: + name: manage_wazuh_configurations + tasks_from: write_wazuh_config.yaml + vars: + config_block: | + + netsh + local + 5716 + 5 + + os: linux + + - name: Truncate alert.json + include_role: + name: manage_alerts + tasks_from: truncate_alert_json.yaml + + - name: Restart wazuh-manager + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: linux + +- name: Configure Windows agent environment + hosts: windows-agent + tasks: + + - name: Create temp folder + win_file: + path: C:\temp\ + state: directory + + - name: Make a backup of ossec.conf + ansible.windows.win_copy: + src: C:\Program Files (x86)\ossec-agent\ossec.conf + dest: C:\temp + remote_src: true + + - name: Add active-response configuration + include_role: + name: manage_wazuh_configurations + tasks_from: write_wazuh_config.yaml + vars: + config_block: | + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`nno" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + os: windows + + - name: Add localfile configuration + include_role: + name: manage_wazuh_configurations + tasks_from: write_wazuh_config.yaml + vars: + config_block: | + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" 
+ Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`nC:\temp\test.txt" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`nsyslog" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + os: windows + + - name: Restart wazuh-agent + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: windows diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/generate_events.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/generate_events.yaml new file mode 100644 index 0000000000..728f026e28 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/generate_events.yaml @@ -0,0 +1,46 @@ +- name: Generate events + hosts: windows-agent + vars: + myfile: C:\temp\test.txt + log: "Dec 10 01:02:02 host sshd[1234]: Failed none for root from 192.168.56.11 port 1066 ssh2" + tasks: + + - name: Enable or disable firewall for Domain profiles + community.windows.win_firewall: + state: "{{ firewall_domain_status }}" + profiles: + - Domain + + - name: Enable or disable firewall for Private profiles + community.windows.win_firewall: + state: "{{ firewall_private_status }}" + profiles: + - Private + + - name: Enable or disable firewall for Public profiles + community.windows.win_firewall: + state: "{{ firewall_public_status }}" + profiles: + - Public + + - name: Insert log into test.txt file + win_shell: | + "{{ log }}" | Out-File "{{ myfile }}" -Append -Encoding ASCII + +- name: Get alerts file + hosts: manager + tasks: + + - name: Wait for expected alert + block: + + - name: Waiting for alert + wait_for: + timeout: 20 + + always: + + - name: Get alert json + include_role: + name: manage_alerts + tasks_from: get_alert_json.yaml diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/teardown.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/teardown.yaml new file mode 100644 index 0000000000..e2c05f5ecb --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/playbooks/teardown.yaml @@ -0,0 +1,42 @@ +- name: Cleanup manager environment + hosts: manager + become: true + tasks: + + - name: Remove the active-response block + blockinfile: + path: /var/ossec/etc/ossec.conf + marker: + state: absent + + - name: Restart wazuh-manager + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: linux + +- name: Cleanup Windows agent environment + hosts: windows-agent + tasks: + + - name: Restore ossec.conf without changes + win_copy: + src: C:\temp\ossec.conf + dest: C:\Program Files (x86)\ossec-agent + remote_src: true + + - name: Restart wazuh-agent + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: windows + + - name: Enable firewall for Domain, Public and Private profiles + community.windows.win_firewall: + state: enabled + profiles: + - Domain + - Private + - Public diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/test_cases/cases_firewall_alerts.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/test_cases/cases_firewall_alerts.yaml new file mode 100644 index 0000000000..d9eefe1ccb --- /dev/null +++ 
b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/data/test_cases/cases_firewall_alerts.yaml @@ -0,0 +1,41 @@ +- name: firewall_disabled + description: Set all firewall network to disabled mode. + configuration_parameters: null + metadata: + extra_vars: + firewall_status: disabled + firewall_domain_status: disabled + firewall_public_status: disabled + firewall_private_status: disabled + firewall_status_logs: + - .*"FIREWALL_DOMAIN","status\d":"inactive".* + - .*"FIREWALL_PRIVATE","status\d":"inactive".* + - .*"FIREWALL_PUBLIC","status\d":"inactive".* + +- name: firewall_public_profile_disabled + description: Set public firewall network to disabled mode. + configuration_parameters: null + metadata: + extra_vars: + firewall_status: disabled + firewall_domain_status: enabled + firewall_public_status: disabled + firewall_private_status: enabled + firewall_status_logs: + - .*"FIREWALL_DOMAIN","status\d":"active".* + - .*"FIREWALL_PRIVATE","status\d":"active".* + - .*"FIREWALL_PUBLIC","status\d":"inactive".* + +- name: firewall_enabled + description: Set all firewall network to enabled mode. + configuration_parameters: null + metadata: + extra_vars: + firewall_status: enabled + firewall_domain_status: enabled + firewall_public_status: enabled + firewall_private_status: enabled + firewall_status_logs: + - .*"FIREWALL_DOMAIN","status\d":"active".* + - .*"FIREWALL_PRIVATE","status\d":"active".* + - .*"FIREWALL_PUBLIC","status\d":"active".* diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/test_firewall_alerts.py b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/test_firewall_alerts.py new file mode 100644 index 0000000000..025ad2ee07 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_alerts/test_firewall_alerts.py @@ -0,0 +1,78 @@ +import os +import pytest +from tempfile import gettempdir + +import wazuh_testing as fw +from wazuh_testing.tools import configuration as config +from wazuh_testing import event_monitor as evm + + +# Test cases data +test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +test_cases_file_path = os.path.join(test_data_path, 'test_cases', 'cases_firewall_alerts.yaml') +alerts_json = os.path.join(gettempdir(), 'alerts.json') + +# Playbooks +configuration_playbooks = ['configuration.yaml'] +events_playbooks = ['generate_events.yaml'] +teardown_playbooks = ['teardown.yaml'] + +# Configuration +configurations, configuration_metadata, cases_ids = config.get_test_cases_data(test_cases_file_path) + + +@pytest.mark.parametrize('metadata', configuration_metadata, ids=cases_ids) +def test_firewall_alerts(configure_environment, metadata, generate_events): + ''' + description: Check that an alert is generated when the firewall is disabled. + + test_phases: + - setup: + - Load Wazuh light configuration. + - Apply ossec.conf configuration changes according to the configuration template and use case. + - Truncate wazuh logs. + - Restart wazuh-manager service to apply configuration changes. + - Change status to the Windows firewall. + - Insert log into a monitorized file + - test: + - Check in the alerts.json that an alerts of firewall disabled are generated when the firewall is disabled. + - Check in the alerts.json that no alerts of firewall enabled are generated when the firewall is enabled. + - teardown: + - Restore initial configuration, ossec.conf. 
+ + wazuh_min_version: 4.5.0 + + tier: 0 + + parameters: + - configurate_environment: + type: fixture + brief: Set the wazuh configuration according to the configuration playbook. + - metadata: + type: dict + brief: Wazuh configuration metadata. + - generate_events: + type: fixture + brief: Generate RDP attack to the agent and copy the ossec.log and active-responses.log file to a specific + local folder to be analyzed. + + assertions: + - Verify that the logs have been generated. + + input_description: + - The `configuration.yaml` file provides the module configuration for this test. + - The `generate_events.yaml` file provides the function to copy the log file to a temporary folder, provides + the playbook to insert the log into the test files and, enable/disable the firewall. + ''' + status = metadata['extra_vars']['firewall_status'] + for expected_log in metadata['extra_vars']['firewall_status_logs']: + if 'disabled' in status: + # When the firewall is inactive the alerts.json contains messages about firewall inactive status. + evm.check_event(callback=expected_log, file_to_monitor=alerts_json, timeout=fw.T_5, + error_message=f"Could not find the event '{expected_log}' in alerts.json file") + else: + # When the firewall is active the alerts.json file does not contain any message about firewall status. + with pytest.raises(TimeoutError): + evm.check_event(callback=expected_log, file_to_monitor=alerts_json, timeout=fw.T_5, + error_message=f"Could not find the event '{expected_log}' in alerts.json file") + raise AttributeError(f"The alert '{expected_log}' was generated unexpectedly") diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/configuration.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/configuration.yaml new file mode 100644 index 0000000000..74e1f9efa5 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/configuration.yaml @@ -0,0 +1,90 @@ +- name: Configure manager environment + hosts: manager + become: true + tasks: + + - name: Update packages list (Ubuntu) + ansible.builtin.apt: + update_cache: true + when: ansible_facts['distribution'] == "Ubuntu" + + # Install hydra to attempt the RDP brute force attack + - name: Install hydra (Ubuntu) + package: + name: hydra=9.2-1ubuntu1 + state: present + when: ansible_facts['distribution'] == "Ubuntu" + + - name: Update packages list (CentOS) + ansible.builtin.yum: + update_cache: true + when: ansible_facts['distribution'] == "CentOS" + + - name: Install hydra (CentOS) + become: true + package: + name: hydra + state: present + when: ansible_facts['distribution'] == "CentOS" + + - name: Configure the active-response module + include_role: + name: manage_wazuh_configurations + tasks_from: write_wazuh_config.yaml + vars: + config_block: | + + no + netsh + all + 60122 + 5 + + os: linux + + - name: Restart wazuh-manager + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: linux + +- name: Configure Windows agent environment + hosts: windows-agent + tasks: + + - name: Create temp folder + win_file: + path: C:\temp + state: directory + + - name: Make a backup of ossec.conf + ansible.windows.win_copy: + src: C:\Program Files (x86)\ossec-agent\ossec.conf + dest: C:\temp + remote_src: true + + - name: Add active-response configuration + include_role: + name: manage_wazuh_configurations + tasks_from: write_wazuh_config.yaml + vars: + 
config_block: | + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`nno" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + Add-Content 'C:\Program Files (x86)\ossec-agent\ossec.conf' "`n" + os: windows + + - name: Truncate active-responses.log + win_file: + path: C:\Program Files (x86)\ossec-agent\active-response\active-responses.log + state: absent + + - name: Restart wazuh-agent + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: windows diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/generate_events.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/generate_events.yaml new file mode 100644 index 0000000000..6656fc5861 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/generate_events.yaml @@ -0,0 +1,39 @@ +- name: Enable/Disable firewall + hosts: windows-agent + tasks: + + - name: Enable or disable firewall for Domain, Public and Private profiles + community.windows.win_firewall: + state: "{{ firewall_status }}" + profiles: + - Domain + - Private + - Public + +- name: Generate events + hosts: manager + tasks: + + - name: Attempt a RDP brute force attack + shell: hydra -l {{ item }} -p invalid_password rdp://{{ hostvars['windows-agent']['ansible_host'] }} + loop: + - test_user + + - name: Wait for expected log + block: + + - name: Waiting for log + wait_for: + timeout: 5 + +- name: Copy log file to a tmp folder + hosts: windows-agent + tasks: + + - name: Get active-responses log + include_role: + name: manage_logs + tasks_from: get_log_file.yaml + vars: + dest_path: test_firewall_status/ + source_path: C:\Program Files (x86)\ossec-agent\active-response\active-responses.log diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/teardown.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/teardown.yaml new file mode 100644 index 0000000000..e2c05f5ecb --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/playbooks/teardown.yaml @@ -0,0 +1,42 @@ +- name: Cleanup manager environment + hosts: manager + become: true + tasks: + + - name: Remove the active-response block + blockinfile: + path: /var/ossec/etc/ossec.conf + marker: + state: absent + + - name: Restart wazuh-manager + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: linux + +- name: Cleanup Windows agent environment + hosts: windows-agent + tasks: + + - name: Restore ossec.conf without changes + win_copy: + src: C:\temp\ossec.conf + dest: C:\Program Files (x86)\ossec-agent + remote_src: true + + - name: Restart wazuh-agent + include_role: + name: manage_wazuh + tasks_from: restart_wazuh.yaml + vars: + os: windows + + - name: Enable firewall for Domain, Public and Private profiles + community.windows.win_firewall: + state: enabled + profiles: + - Domain + - Private + - Public diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/test_cases/cases_firewall_status.yaml b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/test_cases/cases_firewall_status.yaml new file mode 100644 index 
0000000000..81475cc694 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/data/test_cases/cases_firewall_status.yaml @@ -0,0 +1,21 @@ +- name: firewall_disabled + description: Set all firewall network to disabled mode. + configuration_parameters: null + metadata: + extra_vars: + firewall_status: disabled + firewall_status_logs: + - .*"FIREWALL_DOMAIN","status\d":"inactive".* + - .*"FIREWALL_PRIVATE","status\d":"inactive".* + - .*"FIREWALL_PUBLIC","status\d":"inactive".* + +- name: firewall_enabled + description: Set all firewall network to enabled mode. + configuration_parameters: null + metadata: + extra_vars: + firewall_status: enabled + firewall_status_logs: + - .*"FIREWALL_DOMAIN","status\d":"active".* + - .*"FIREWALL_PRIVATE","status\d":"active".* + - .*"FIREWALL_PUBLIC","status\d":"active".* diff --git a/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/test_firewall_status.py b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/test_firewall_status.py new file mode 100644 index 0000000000..8ceaf4c293 --- /dev/null +++ b/tests/system/test_active_response/test_netsh_windows_command/test_firewall_status/test_firewall_status.py @@ -0,0 +1,82 @@ +import os +import pytest +from tempfile import gettempdir + +import wazuh_testing as fw +from wazuh_testing.tools import configuration as config +from wazuh_testing import event_monitor as evm + + +# Test cases data +test_data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data') +test_cases_file_path = os.path.join(test_data_path, 'test_cases', 'cases_firewall_status.yaml') +active_responses_log = os.path.join(gettempdir(), 'test_firewall_status', 'active-responses.log') + +# Playbooks +configuration_playbooks = ['configuration.yaml'] +events_playbooks = ['generate_events.yaml'] +teardown_playbooks = ['teardown.yaml'] + +# Configuration +configurations, configuration_metadata, cases_ids = config.get_test_cases_data(test_cases_file_path) + + +@pytest.mark.parametrize('metadata', configuration_metadata, ids=cases_ids) +def test_firewall_status(metadata, configure_environment, generate_events): + ''' + description: Check that active-response detect that the firewall is disabled/enabled when and hydra attack is + performed. + + test_phases: + - setup: + - Load Wazuh light configuration. + - Apply ossec.conf configuration changes according to the configuration template and use case. + - Apply custom settings in local_internal_options.conf. + - Truncate wazuh logs. + - Restart wazuh-manager service to apply configuration changes. + - Change status to the Windows firewall. + - Generate an RDP attack to the Windows agent + - test: + - Check in the active-responses.log that a log for firewall disabled is generated when the firewall is + disabled. + - Check in the active-responses.log that no log for firewall enabled is generated when the firewall is + enabled. + - teardown: + - Restore initial configuration, ossec.conf. + + wazuh_min_version: 4.5.0 + + tier: 0 + + parameters: + - configurate_environment: + type: fixture + brief: Set the wazuh configuration according to the configuration playbook. + - metadata: + type: dict + brief: Wazuh configuration metadata. + - generate_events: + type: fixture + brief: Generate RDP attack to the agent and copy the active-responses.log file to a specific local folder + to be analyzed. + + assertions: + - Verify that the logs have been generated. 
+ + input_description: + - The `configuration.yaml` file provides the module configuration for this test. + - The `generate_events.yaml` file provides the function to copy the log file to a temporary folder and + provides the playbook to generate the RDP attack. + ''' + status = metadata['extra_vars']['firewall_status'] + for expected_log in metadata['extra_vars']['firewall_status_logs']: + if 'disabled' in status: + # When the firewall is inactive the alerts.json contains messages about firewall inactive status. + evm.check_event(callback=expected_log, file_to_monitor=active_responses_log, timeout=fw.T_5, + error_message=f"Could not find the event '{expected_log}' in active-responses.log file") + else: + # When the firewall is active the alerts.json file does not contain any message about firewall status. + with pytest.raises(TimeoutError): + evm.check_event(callback=expected_log, file_to_monitor=active_responses_log, timeout=fw.T_5, + error_message=f"Could not find the event '{expected_log}' in active-responses.log file") + raise AttributeError(f"The log '{expected_log}' was generated unexpectedly") diff --git a/tests/system/test_cluster/test_agent_groups/__init__.py b/tests/system/test_cluster/test_agent_groups/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/tests/system/test_cluster/test_agent_groups/data/test_group_sync_cases.yml b/tests/system/test_cluster/test_agent_groups/data/test_group_sync_cases.yml new file mode 100644 index 0000000000..0bd58a0a38 --- /dev/null +++ b/tests/system/test_cluster/test_agent_groups/data/test_group_sync_cases.yml @@ -0,0 +1,21 @@ +- + name: 'delete_folder_master' + test_case: + host: 'wazuh-master' + group_deleted: 'group_master' + first_time_check: 'syncreq' + second_time_check: 'synced' +- + name: 'delete_folder_worker1' + test_case: + host: 'wazuh-worker1' + group_deleted: 'group_worker1' + first_time_check: 'synced' + second_time_check: 'synced' +- + name: 'delete_folder_worker2' + test_case: + host: 'wazuh-worker2' + group_deleted: 'group_worker2' + first_time_check: 'synced' + second_time_check: 'synced' diff --git a/tests/system/test_cluster/test_agent_groups/test_group_sync_status.py b/tests/system/test_cluster/test_agent_groups/test_group_sync_status.py new file mode 100644 index 0000000000..0d21de81b3 --- /dev/null +++ b/tests/system/test_cluster/test_agent_groups/test_group_sync_status.py @@ -0,0 +1,179 @@ +''' +copyright: Copyright (C) 2015-2023, Wazuh Inc. + Created by Wazuh, Inc. . + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 +type: system +brief: Wazuh manager handles agent groups. + If a group is deleted from a master cluster, there will be an instance where the agents require a resynchronization (syncreq). + If the group is deleted from a worker cluster, the cluster master will take care of reestablishing the group structure without the need for resynchronization. + This test suite tests the correct functioning of the mentioned use case. 
+ +tier: 0 +modules: + - enrollment +components: + - manager + - agent +daemons: + - wazuh-authd + - wazuh-agentd +os_platform: + - linux +os_version: + - Debian Buster +references: + - https://documentation.wazuh.com/current/user-manual/registering/agent-enrollment.html +''' + +import os +import pytest +import time +from wazuh_testing.tools import WAZUH_PATH +from wazuh_testing.tools.file import read_file, read_yaml +from wazuh_testing.tools.system import HostManager +from system import assign_agent_to_new_group, create_new_agent_group, delete_group_of_agents, execute_wdb_query + +pytestmark = [pytest.mark.cluster, pytest.mark.enrollment_cluster_env] + +testinfra_hosts = ['wazuh-master', 'wazuh-worker1', 'wazuh-worker2', 'wazuh-agent1', 'wazuh-agent2'] +groups = ['group_master', 'group_worker1', 'group_worker2'] +agents = ['wazuh-agent1', 'wazuh-agent2'] +workers = ['wazuh-worker1', 'wazuh-worker2'] +groups_created = [] +first_time_check = "synced" +second_time_check = "synced" +network = {} + +inventory_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), + 'provisioning', 'enrollment_cluster', 'inventory.yml') +host_manager = HostManager(inventory_path) +local_path = os.path.dirname(os.path.abspath(__file__)) +test_cases_yaml = read_yaml(os.path.join(local_path, 'data/test_group_sync_cases.yml')) +wdb_query = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'script/wdb-query.py') +agent_conf_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), + '..', '..' ,'provisioning', 'enrollment_cluster', 'roles', 'agent-role', 'files', 'ossec.conf') + +def get_ip_directions(): + global network + for host in testinfra_hosts: + network[host] = host_manager.get_host_ip(host, 'eth0') + +def delete_all_groups(): + for group in groups: + delete_group_of_agents(testinfra_hosts[0],group,host_manager) + +def query_database(): + query = "global 'sql select group_sync_status from agent;'" + response = execute_wdb_query(query, testinfra_hosts[0], host_manager) + return response + +def first_check(): + global first_time_check + first_time_check = "synced" + s_time = 15 + for i in range(s_time): + time.sleep(0.25) + result = query_database() + if 'syncreq' in result: + first_time_check = "syncreq" + +def second_check(): + time.sleep(10) + global second_time_check + second_time_check = "synced" + result = query_database() + if 'syncreq' in result: + second_time_check = "syncreq" + +@pytest.fixture +def network_configuration(): + get_ip_directions() + for worker in workers: + old_agent_configuration = read_file(agent_conf_file) + new_configuration = old_agent_configuration.replace('
<address>MANAGER_IP</address>', +                                                            f"<address>{network[worker][0]}</address>
") + + host_manager.modify_file_content(host=agents[worker.index(worker)], path=f'{WAZUH_PATH}/etc/ossec.conf', + content=new_configuration) + host_manager.get_host(testinfra_hosts[0]).ansible('command', f'service wazuh-manager restart', check=False) + for agent in agents: + host_manager.get_host(agent).ansible('command', f'service wazuh-agent restart', check=False) + +@pytest.fixture +def group_creation(): + delete_all_groups() + for group in groups: + create_new_agent_group(testinfra_hosts[0], group, host_manager) + +@pytest.fixture +def agent_group_assignation(): + agent_ids = host_manager.run_command(testinfra_hosts[0], f'cut -c 1-3 {WAZUH_PATH}/etc/client.keys').split() + for group in groups: + for agent_id in agent_ids: + assign_agent_to_new_group(testinfra_hosts[0], group, agent_id, host_manager) + +@pytest.fixture +def delete_group_folder(test_case): + groups_created = host_manager.run_command(testinfra_hosts[0], f'{WAZUH_PATH}/bin/agent_groups') + if test_case['test_case']['group_deleted'] in groups_created: + host_manager.run_command(test_case['test_case']['host'], f"rm -r {WAZUH_PATH}/etc/shared/{test_case['test_case']['group_deleted']} -f") + +@pytest.fixture +def wait_end_initial_syncreq(): + result = query_database() + while 'syncreq' in result: + time.sleep(1) + result = query_database() + +@pytest.mark.parametrize('test_case', [cases for cases in test_cases_yaml], ids=[cases['name'] + for cases in test_cases_yaml]) + +def test_group_sync_status(test_case, network_configuration, + group_creation, agent_group_assignation, + wait_end_initial_syncreq, delete_group_folder): + + ''' + description: Delete a group folder in wazuh server cluster and check group_sync status in 2 times. + wazuh_min_version: 4.4.0 + parameters: + - test_case: + type: list + brief: List of tests to be performed. + - network_configuration + type: function + brief: Delete logs generally talking + - group_creation: + type: function + brief: Delete and create from zero all the groups that are going to be used for testing + - agent_group_assignation: + type: function + brief: Assign agents to groups + - wait_end_initial_syncreq: + type: function + brief: Wait until syncreqs related with the test-environment setting get neutralized + - delete_group_folder: + type: function + brief: Delete the folder-group assigned by test case (trigger) + + assertions: + - Verify that group_sync status changes according the trigger. + + input_description: Different use cases are found in the test module and include parameters. + + expected_output: + - If the group-folder is deleted from master cluster, it is expected to find a syncreq group_sync status until it gets synced. + - If the group-folder is deletef rom a worker cluster, it is expected that master cluster recreates groups without syncreq status. + ''' + #Checks + first_check() + second_check() + + #Results + assert test_case['test_case']['first_time_check'] == first_time_check + assert test_case['test_case']['second_time_check'] == second_time_check + + + + + + diff --git a/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yaml b/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yaml new file mode 100644 index 0000000000..a106bcc5a8 --- /dev/null +++ b/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yaml @@ -0,0 +1,4 @@ +wazuh-agent1: + - regex: .*Finished FIM sync. 
+ path: /var/ossec/logs/ossec.log + timeout: 60 diff --git a/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yml b/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yml deleted file mode 100644 index d5e627e9ef..0000000000 --- a/tests/system/test_fim/test_synchronization/data/agent_initializing_synchronization.yml +++ /dev/null @@ -1,4 +0,0 @@ -wazuh-agent1: - - regex: ".*Initializing FIM Integrity Synchronization check. (.+)$" - path: "/var/ossec/logs/ossec.log" - timeout: 60 \ No newline at end of file diff --git a/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yaml b/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yaml new file mode 100644 index 0000000000..3ea9f7ddbd --- /dev/null +++ b/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yaml @@ -0,0 +1,4 @@ +wazuh-manager: + - regex: .*Finished FIM sync. + path: /var/ossec/logs/ossec.log + timeout: 60 diff --git a/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yml b/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yml deleted file mode 100644 index 804ea2a01d..0000000000 --- a/tests/system/test_fim/test_synchronization/data/manager_initializing_synchronization.yml +++ /dev/null @@ -1,4 +0,0 @@ -wazuh-manager: - - regex: ".*Initializing FIM Integrity Synchronization check. (.+)$" - path: "/var/ossec/logs/ossec.log" - timeout: 60 \ No newline at end of file diff --git a/tests/system/test_fim/test_synchronization/test_synchronization.py b/tests/system/test_fim/test_synchronization/test_synchronization.py index 87565a3d41..21395031e7 100644 --- a/tests/system/test_fim/test_synchronization/test_synchronization.py +++ b/tests/system/test_fim/test_synchronization/test_synchronization.py @@ -62,8 +62,8 @@ messages_path = [os.path.join(local_path, 'data/messages.yml'), os.path.join(local_path, 'data/delete_message.yml'), os.path.join(local_path, 'data/wait_fim_scan.yml'), - os.path.join(local_path, 'data/agent_initializing_synchronization.yml'), - os.path.join(local_path, 'data/manager_initializing_synchronization.yml') + os.path.join(local_path, 'data/agent_initializing_synchronization.yaml'), + os.path.join(local_path, 'data/manager_initializing_synchronization.yaml') ] tmp_path = os.path.join(local_path, 'tmp') scheduled_mode = 'testdir1' @@ -103,7 +103,7 @@ def test_synchronization(folder_path, case, host): expected_output: - Different test cases are contained in external YAML file - (agent_initializing_synchronization.yml and manager_initializing_synchronization.yml) + (agent_initializing_synchronization.yml and manager_initializing_synchronization.yaml) tags: - fim_basic_usage - scheduled diff --git a/tests/system/test_group_sync_status/data/test_group_sync_cases.yml b/tests/system/test_group_sync_status/data/test_group_sync_cases.yml new file mode 100644 index 0000000000..23dcbeda6a --- /dev/null +++ b/tests/system/test_group_sync_status/data/test_group_sync_cases.yml @@ -0,0 +1,21 @@ +- + name: 'delete_folder_master' + test_case: + host: 'wazuh-master' + g_deleted: 'g_master' + first_time: 'syncreq' + second_time: 'synced' +- + name: 'delete_folder_worker1' + test_case: + host: 'wazuh-worker1' + g_deleted: 'g_worker1' + first_time: 'synced' + second_time: 'synced' +- + name: 'delete_folder_worker2' + test_case: + host: 'wazuh-worker2' + g_deleted: 'g_worker2' + first_time: 'synced' + 
second_time: 'synced' diff --git a/tests/system/test_group_sync_status/test_group_sync_status.py b/tests/system/test_group_sync_status/test_group_sync_status.py new file mode 100644 index 0000000000..45721f0d1c --- /dev/null +++ b/tests/system/test_group_sync_status/test_group_sync_status.py @@ -0,0 +1,202 @@ +''' +copyright: Copyright (C) 2015-2021, Wazuh Inc. + Created by Wazuh, Inc. . + This program is free software; you can redistribute it and/or modify it under the terms of GPLv2 +type: system +brief: Wazuh manager handles agent groups. + If a group is deleted from a master cluster, there will be an instance where the agents require a resynchronization (syncreq). + If the group is deleted from a worker cluster, the cluster master will take care of reestablishing the group structure without the need for resynchronization. + This test suite tests the correct functioning of the mentioned use case. + +tier: 0 +modules: + - enrollment +components: + - manager + - agent +daemons: + - wazuh-authd + - wazuh-agentd +os_platform: + - linux +os_version: + - Debian Buster +references: + - https://documentation.wazuh.com/current/user-manual/registering/agent-enrollment.html +''' + +import os +from time import sleep + +import pytest +from wazuh_testing.tools import WAZUH_PATH, WAZUH_LOGS_PATH +from wazuh_testing.tools.file import read_file, read_yaml, write_file +from wazuh_testing.tools.monitoring import HostMonitor +from wazuh_testing.tools.system import HostManager +from wazuh_testing.tools.utils import format_ipv6_long +import time +import subprocess +import threading +import pytest + +#Parameters +testinfra_hosts = ['wazuh-master', 'wazuh-worker1','wazuh-worker2'] +groups = ['g_master', 'g_worker1', 'g_worker2'] +agents = ['wazuh-agent1', 'wazuh-agent2'] +workers = ['wazuh-worker1', 'wazuh-worker2'] +groups_created = [] +syncreq = "synced" +end_syncreq = "synced" +network = {} +client_keys = {} + +#Endpoints +inventory_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), + 'provisioning', 'enrollment_cluster', 'inventory.yml') +host_manager = HostManager(inventory_path) +local_path = os.path.dirname(os.path.abspath(__file__)) +test_cases_yaml = read_yaml(os.path.join(local_path, 'data/test_group_sync_cases.yml')) +wdb_query = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'wdb-query.py') + +@pytest.fixture(scope='function') +def delete_logs(): + for infra in testinfra_hosts: + host_manager.clear_file(host=infra, file_path=os.path.join(WAZUH_LOGS_PATH, 'ossec.log')) + for agent in agents: + host_manager.clear_file(host=agent, file_path=os.path.join(WAZUH_LOGS_PATH, 'ossec.log')) + +def get_api_token(): + global host_manager + return host_manager.get_api_token(testinfra_hosts[0]) + +@pytest.fixture(scope='function') +def group_creation(): + #Delete first + delete_AllGroups() + for group in groups: + response = host_manager.make_api_call(host=testinfra_hosts[0], token=get_api_token(), method='POST', + endpoint='/groups', request_body={'group_id': group}) + assert response['status'] == 200, f"Failed to create {group} group: {response}" + #print("Group Created: "+ group) + #print(host_manager.run_command(testinfra_hosts[0], f'{WAZUH_PATH}/bin/agent_groups')) + +def delete_AllGroups(): + global groups_created + groups_created = host_manager.run_command(testinfra_hosts[0], f'{WAZUH_PATH}/bin/agent_groups') + for group in groups: + if group in groups_created: + response = host_manager.make_api_call(host=testinfra_hosts[0], token=get_api_token(), method='DELETE', 
+ endpoint=f'/groups?groups_list={group}') + assert response['status'] == 200, f"Failed to delete {group} group: {response}" + #print("Group Deleted: " + group) + +@pytest.fixture(scope='function') +def agent_groupAssignation(): + agent_ids = host_manager.run_command(testinfra_hosts[0], f'cut -c 1-3 {WAZUH_PATH}/etc/client.keys').split() + for group in groups: + for agent_id in agent_ids: + response = host_manager.make_api_call(host=testinfra_hosts[0], token=get_api_token(), method='PUT', + endpoint=f'/agents/{agent_id}/group/{group}') + assert response['status'] == 200, f"Failed to assign agent {agent_id} in {group} group: {response}" + #print('Agent assigned: ' + agent_id + ' into group: '+ group) + #print(host_manager.run_command(testinfra_hosts[0], f'{WAZUH_PATH}/bin/agent_groups')) + +@pytest.fixture(scope='function') +def delete_group_folder(test_case): + groups_created = host_manager.run_command(testinfra_hosts[0], f'{WAZUH_PATH}/bin/agent_groups') + if test_case['test_case']['g_deleted'] in groups_created: + host_manager.run_command(test_case['test_case']['host'], f"rm -r {WAZUH_PATH}/etc/shared/{test_case['test_case']['g_deleted']} -f") + #print("Group deleted: "+ test_case['test_case']['g_deleted'] + " from node:" + test_case['test_case']['host']) + else: + #print("The group does not exist") + pass + +@pytest.fixture(scope='function') +def wdb_query_creator(): + wdb = read_file(wdb_query) + host_manager.modify_file_content(host=testinfra_hosts[0], path=f'{WAZUH_PATH}/wdb-query.py',content=wdb) + +def query_database(): + query = 'global sql select group_sync_status from agent;' + response= host_manager.run_command(testinfra_hosts[0], f'python3 {WAZUH_PATH}/wdb-query.py "{query}"') + #print(response) + return response + +@pytest.fixture(scope='function') +def check_initial_syncreq(): + result = query_database() + while 'syncreq' in result: + time.sleep(1) + #print("Waiting for syncreq neutralization") + result = query_database() + +@pytest.fixture(scope='function') +def check_afterDel_syncreq(): + global syncreq + syncreq = "synced" + s_time = 15 + for i in range(s_time): + time.sleep(0.25) + result = query_database() + #print('Scan: ' + str((i/s_time)*100)[0:4] + '%') + if 'syncreq' in result: + syncreq = "syncreq" + #print("After delete syncreq detected") + +@pytest.fixture(scope='function') +def check_syncreq_end(): + time.sleep(10) + global end_syncreq + end_syncreq = "synced" + result = query_database() + if 'syncreq' in result: + end_syncreq = "syncreq" + #print("Second time syncreq detected") + +@pytest.mark.parametrize('test_case', [cases for cases in test_cases_yaml], ids=[cases['name'] + for cases in test_cases_yaml]) + +def test_group_sync_status(test_case, + group_creation, agent_groupAssignation, wdb_query_creator, check_initial_syncreq, + delete_group_folder, check_afterDel_syncreq, check_syncreq_end): + ''' + description: Delete a group folder in wazuh server cluster and check group_sync status in 2 times. + wazuh_min_version: 4.4.0 + parameters: + - test_case: + type: list + brief: List of tests to be performed. 
+ - group_creation: + type: function + brief: Delete and create from zero all the groups that are going to be used for testing + - agent_groupAssignation: + type: function + brief: Assign agents to groups + - wdb_query_creator: + type: function + brief: Create the script to query group-sync status + - check_initial_syncreq: + type: function + brief: Wait until syncreqs related with the test-environment setting get neutralized + - delete_group_folder: + type: function + brief: Delete the folder-group assigned by test case (trigger) + - check_afterDel_syncreq: + type: function + brief: Check for group_sync status after the trigger + - check_syncreq_end: + type: function + brief: Check for group_sync changes after 10 seconds + + assertions: + - Verify that group_sync status changes according the trigger. + + input_description: Different use cases are found in the test module and include parameters. + + expected_output: + - If the group-folder is deleted from master cluster, it is expected to find a syncreq group_sync status until it gets synced. + - If the group-folder is deletef rom a worker cluster, it is expected that master cluster recreates groups without syncreq status. + ''' + #Results + assert test_case['test_case']['first_time'] == syncreq + assert test_case['test_case']['second_time'] == end_syncreq \ No newline at end of file diff --git a/tests/system/test_group_sync_status/wdb-query.py b/tests/system/test_group_sync_status/wdb-query.py new file mode 100644 index 0000000000..ee6d808c2b --- /dev/null +++ b/tests/system/test_group_sync_status/wdb-query.py @@ -0,0 +1,40 @@ +#! /usr/bin/python3 +# September 16, 2019 + +# Syntax: wdb-query.py + +from socket import socket, AF_UNIX, SOCK_STREAM +from struct import pack, unpack +from sys import argv, exit +from json import dumps, loads +from json.decoder import JSONDecodeError + +def db_query(query): + WDB = '/var/ossec/queue/db/wdb' + + sock = socket(AF_UNIX, SOCK_STREAM) + sock.connect(WDB) + + msg = query.encode() + sock.send(pack(" 1 and argv[1] in ('-h', '--help')): + print("Syntax: {0} ") + exit(1) + + response = db_query(' '.join(argv[1:])) + print(pretty(response)) diff --git a/version.json b/version.json index 90990b71a5..fbafe447b3 100644 --- a/version.json +++ b/version.json @@ -1,4 +1,4 @@ { - "version": "4.4.0", + "version": "4.6.0", "revision": "1" }
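
Both the new `wdb-query.py` helper and the `query_database()` calls in these tests go through wazuh-db's unix socket. As a reference, here is a minimal sketch of that interaction, assuming the standard 4-byte little-endian length-prefixed framing on `/var/ossec/queue/db/wdb`; the function name and structure are illustrative, not the exact contents of the script added in this PR.

```python
from socket import socket, AF_UNIX, SOCK_STREAM
from struct import pack, unpack

WDB_SOCKET = '/var/ossec/queue/db/wdb'  # wazuh-db socket on the manager node


def wdb_query(query: str) -> str:
    """Send a single query to wazuh-db and return the raw response string."""
    sock = socket(AF_UNIX, SOCK_STREAM)
    sock.connect(WDB_SOCKET)
    try:
        payload = query.encode()
        # Request framing: 4-byte little-endian length header followed by the payload.
        sock.send(pack('<I', len(payload)) + payload)
        # The response uses the same framing; its body typically starts with 'ok ' or 'err '.
        length = unpack('<I', sock.recv(4))[0]
        return sock.recv(length).decode(errors='ignore')
    finally:
        sock.close()


if __name__ == '__main__':
    # Same query polled by the group sync status tests.
    print(wdb_query('global sql select group_sync_status from agent;'))
```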
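
The expected `firewall_status_logs` entries in `cases_firewall_status.yaml` are plain regular expressions matched against `active-responses.log`. A quick illustration with one of those patterns; the log fragment below is synthetic and only shows what the regex requires, real entries may carry extra fields.

```python
import re

# Pattern copied from cases_firewall_status.yaml (firewall_disabled case).
pattern = r'.*"FIREWALL_DOMAIN","status\d":"inactive".*'

# Synthetic fragment built to satisfy the pattern; not a verbatim Wazuh log line.
synthetic_line = '2023/01/01 00:00:00 firewall check: "FIREWALL_DOMAIN","status1":"inactive"'

assert re.search(pattern, synthetic_line) is not None
```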