ip/tests_mgmt_ipv6_only.py fails during teardown. #12705

Closed
vivekverma-arista opened this issue May 3, 2024 · 1 comment · Fixed by #12825
vivekverma-arista commented May 3, 2024

Issue Description

Run ip/test_mgmt_ipv6_only.py.

Results you see

ip/test_mgmt_ipv6_only.py::test_show_features_ipv6_only[ld496] PASSED                                                    [ 23%]
ip/test_mgmt_ipv6_only.py::test_image_download_ipv6_only[ld496] SKIPPED (Cannot get image url)                           [ 30%]
ip/test_mgmt_ipv6_only.py::test_show_features_ipv6_only[ld202] PASSED                                                    [ 38%]
ip/test_mgmt_ipv6_only.py::test_image_download_ipv6_only[ld202] SKIPPED (Cannot get image url)                           [ 46%]
ip/test_mgmt_ipv6_only.py::test_syslog_ipv6_only[fd82:b34f:cc99::100-None] PASSED                                        [ 53%]
ip/test_mgmt_ipv6_only.py::test_syslog_ipv6_only[fd82:b34f:cc99::100-fd82:b34f:cc99::200] PASSED                         [ 61%]
ip/test_mgmt_ipv6_only.py::test_snmp_ipv6_only[ld496] PASSED                                                             [ 69%]
ip/test_mgmt_ipv6_only.py::test_ro_user_ipv6_only[ld496] PASSED                                                          [ 76%]
ip/test_mgmt_ipv6_only.py::test_rw_user_ipv6_only[ld496] PASSED                                                          [ 84%]
ip/test_mgmt_ipv6_only.py::test_telemetry_output_ipv6_only[ld496-True] PASSED                                            [ 92%]
ip/test_mgmt_ipv6_only.py::test_ntp_ipv6_only[True] PASSED                                                               [100%]
------------------------------------------------------ live log teardown -------------------------------------------------------
06:51:13 __init__._fixture_generator_decorator    L0099 ERROR  |
Host unreachable in the inventory
Traceback (most recent call last):
  File "/data/tests/common/plugins/log_section_start/__init__.py", line 95, in _fixture_generator_decorator
    next(it)
  File "/data/tests/telemetry/conftest.py", line 128, in setup_streaming_telemetry
    restore_telemetry_forpyclient(duthost, default_client_auth)
  File "/data/tests/telemetry/telemetry_utils.py", line 70, in restore_telemetry_forpyclient
    client_auth_out = duthost.shell('sonic-db-cli CONFIG_DB HGET "%s|gnmi" "client_auth"' % (env.gnmi_config_table),
  File "/data/tests/common/devices/multi_asic.py", line 128, in _run_on_asics
    return getattr(self.sonichost, self.multi_asic_attr)(*module_args, **complex_args)
  File "/data/tests/common/devices/base.py", line 105, in _run
    res = self.module(*module_args, **complex_args)[self.hostname]
  File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    raise AnsibleConnectionFailure(
pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory

ip/test_mgmt_ipv6_only.py::test_ntp_ipv6_only[True] ERROR                                                                [100%]

Results you expected to see

The test shouldn't fail during teardown. This issue is due to a race condition between the fixture teardowns, as mentioned in pull request #12393.

Is it platform specific

generic

Relevant log output

________________________________________ ERROR at teardown of test_ntp_ipv6_only[True] _________________________________________
  + Exception Group Traceback (most recent call last):
  |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 341, in from_call
  |     result: Optional[TResult] = func()
  |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 262, in <lambda>
  |     lambda: ihook(item=item, **kwds), when=when, reraise=reraise
  |   File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 493, in __call__
  |     return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
  |   File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 115, in _hookexec
  |     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  |   File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 152, in _multicall
  |     return outcome.get_result()
  |   File "/usr/local/lib/python3.8/dist-packages/pluggy/_result.py", line 114, in get_result
  |     raise exc.with_traceback(exc.__traceback__)
  |   File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 77, in _multicall
  |     res = hook_impl.function(*args)
  |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 182, in pytest_runtest_teardown
  |     item.session._setupstate.teardown_exact(nextitem)
  |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 537, in teardown_exact
  |     raise exceptions[0]
  | exceptiongroup.ExceptionGroup: errors while tearing down <Module test_mgmt_ipv6_only.py> (5 sub-exceptions)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 526, in teardown_exact
    |     fin()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 701, in <lambda>
    |     subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest))
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1031, in finish
    |     raise exc
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1024, in finish
    |     func()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 911, in _teardown_yield_fixture
    |     next(it)
    |   File "/usr/local/lib/python3.8/dist-packages/decorator.py", line 226, in fun
    |     for res in caller(func, *(extras + args), **kw):
    |   File "/data/tests/common/plugins/log_section_start/__init__.py", line 95, in _fixture_generator_decorator
    |     next(it)
    |   File "/data/tests/telemetry/conftest.py", line 128, in setup_streaming_telemetry
    |     restore_telemetry_forpyclient(duthost, default_client_auth)
    |   File "/data/tests/telemetry/telemetry_utils.py", line 70, in restore_telemetry_forpyclient
    |     client_auth_out = duthost.shell('sonic-db-cli CONFIG_DB HGET "%s|gnmi" "client_auth"' % (env.gnmi_config_table),
    |   File "/data/tests/common/devices/multi_asic.py", line 128, in _run_on_asics
    |     return getattr(self.sonichost, self.multi_asic_attr)(*module_args, **complex_args)
    |   File "/data/tests/common/devices/base.py", line 105, in _run
    |     res = self.module(*module_args, **complex_args)[self.hostname]
    |   File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    |     raise AnsibleConnectionFailure(
    | pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory
    +---------------- 2 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 526, in teardown_exact
    |     fin()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 701, in <lambda>
    |     subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest))
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1031, in finish
    |     raise exc
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1024, in finish
    |     func()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 911, in _teardown_yield_fixture
    |     next(it)
    |   File "/data/tests/conftest.py", line 857, in clear_neigh_entries
    |     dut.command("sudo ip neigh flush nud permanent")
    |   File "/data/tests/common/devices/multi_asic.py", line 128, in _run_on_asics
    |     return getattr(self.sonichost, self.multi_asic_attr)(*module_args, **complex_args)
    |   File "/data/tests/common/devices/base.py", line 105, in _run
    |     res = self.module(*module_args, **complex_args)[self.hostname]
    |   File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    |     raise AnsibleConnectionFailure(
    | pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory
    +---------------- 3 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 526, in teardown_exact
    |     fin()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 701, in <lambda>
    |     subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest))
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1031, in finish
    |     raise exc
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1024, in finish
    |     func()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 911, in _teardown_yield_fixture
    |     next(it)
    |   File "/data/tests/conftest.py", line 2081, in core_dump_and_config_check
    |     duthost.shell("free -h")
    |   File "/data/tests/common/devices/multi_asic.py", line 128, in _run_on_asics
    |     return getattr(self.sonichost, self.multi_asic_attr)(*module_args, **complex_args)
    |   File "/data/tests/common/devices/base.py", line 105, in _run
    |     res = self.module(*module_args, **complex_args)[self.hostname]
    |   File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    |     raise AnsibleConnectionFailure(
    | pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory
    +---------------- 4 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 526, in teardown_exact
    |     fin()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 701, in <lambda>
    |     subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest))
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1031, in finish
    |     raise exc
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1024, in finish
    |     func()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 911, in _teardown_yield_fixture
    |     next(it)
    |   File "/data/tests/conftest.py", line 1891, in get_reboot_cause
    |     uptime_end = duthost.get_up_time()
    |   File "/data/tests/common/devices/sonic.py", line 1001, in get_up_time
    |     up_time_text = self.command("uptime -s")["stdout"]
    |   File "/data/tests/common/devices/base.py", line 105, in _run
    |     res = self.module(*module_args, **complex_args)[self.hostname]
    |   File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    |     raise AnsibleConnectionFailure(
    | pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory
    +---------------- 5 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/runner.py", line 526, in teardown_exact
    |     fin()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 701, in <lambda>
    |     subrequest.node.addfinalizer(lambda: fixturedef.finish(request=subrequest))
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1031, in finish
    |     raise exc
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 1024, in finish
    |     func()
    |   File "/usr/local/lib/python3.8/dist-packages/_pytest/fixtures.py", line 911, in _teardown_yield_fixture
    |     next(it)
    |   File "/data/tests/common/fixtures/duthost_utils.py", line 777, in convert_and_restore_config_db_to_ipv6_only
    |     duthost.shell(f"mv {config_db_bak_file} {config_db_file}")
    |   File "/data/tests/common/devices/multi_asic.py", line 128, in _run_on_asics
    |     return getattr(self.sonichost, self.multi_asic_attr)(*module_args, **complex_args)
    |   File "/data/tests/common/devices/base.py", line 105, in _run
    |     res = self.module(*module_args, **complex_args)[self.hostname]
    |   File "/usr/local/lib/python3.8/dist-packages/pytest_ansible/module_dispatcher/v213.py", line 232, in _run
    |     raise AnsibleConnectionFailure(
    | pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable in the inventory
    +------------------------------------

Output of show version

admin@ld301:~$ show ver

SONiC Software Version: SONiC.branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23
SONiC OS Version: 11
Distribution: Debian 11.9
Kernel: 5.10.0-23-2-amd64
Build commit: 7e17cb782
Build date: Fri Apr 19 13:25:28 UTC 2024
Built by: jenkins@jenkins-arsonic-k8s-6-qs7dh

Platform: x86_64-arista_7050cx3_32s
HwSKU: Arista-7050CX3-32S-C32
ASIC: broadcom
ASIC Count: 1
Serial Number: JPW20280487
Model Number: DCS-7050CX3-32S
Hardware Revision: 01.00
Uptime: 07:13:45 up 1 day,  3:57,  2 users,  load average: 1.64, 0.98, 0.85
Date: Fri 03 May 2024 07:13:45

Docker images:
REPOSITORY                    TAG                                                                            IMAGE ID       SIZE
docker-macsec                 latest                                                                         4dcdcc24e5b7   329MB
docker-dhcp-relay             latest                                                                         8cc2e016b875   310MB
docker-gbsyncd-broncos        branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   e796a098e0d5   352MB
docker-gbsyncd-broncos        latest                                                                         e796a098e0d5   352MB
docker-gbsyncd-credo          branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   c5b41244a098   318MB
docker-gbsyncd-credo          latest                                                                         c5b41244a098   318MB
docker-syncd-brcm             branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   a66b315b06b3   721MB
docker-syncd-brcm             latest                                                                         a66b315b06b3   721MB
docker-teamd                  branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   e854b4581562   327MB
docker-teamd                  latest                                                                         e854b4581562   327MB
docker-snmp                   branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   699c2c7c994f   340MB
docker-snmp                   latest                                                                         699c2c7c994f   340MB
docker-sflow                  branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   e70ddef7bd52   329MB
docker-sflow                  latest                                                                         e70ddef7bd52   329MB
docker-router-advertiser      branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   a90b3db631c5   301MB
docker-router-advertiser      latest                                                                         a90b3db631c5   301MB
docker-platform-monitor       branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   d80d8d582c29   421MB
docker-platform-monitor       latest                                                                         d80d8d582c29   421MB
docker-orchagent              branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   849bd9aac716   339MB
docker-orchagent              latest                                                                         849bd9aac716   339MB
docker-nat                    branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   0b9d069b6911   330MB
docker-nat                    latest                                                                         0b9d069b6911   330MB
docker-mux                    branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   2142c7b2b92a   349MB
docker-mux                    latest                                                                         2142c7b2b92a   349MB
docker-lldp                   branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   eef1c09b8a21   343MB
docker-lldp                   latest                                                                         eef1c09b8a21   343MB
docker-sonic-gnmi             branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   f958b076a0cb   389MB
docker-sonic-gnmi             latest                                                                         f958b076a0cb   389MB
docker-fpm-frr                branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   731c9ffedd1c   359MB
docker-fpm-frr                latest                                                                         731c9ffedd1c   359MB
docker-eventd                 branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   9e605d9c5f6d   301MB
docker-eventd                 latest                                                                         9e605d9c5f6d   301MB
docker-database               branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   acf386b75d42   301MB
docker-database               latest                                                                         acf386b75d42   301MB
docker-sonic-mgmt-framework   branch.202311-ars.4afec29a-buildimage.origin.202311-nightly-2024.04.19.07.23   ba61a4b2bb99   417MB
docker-sonic-mgmt-framework   latest                                                                         ba61a4b2bb99   417MB


Attach files (if any)

No response

vivekverma-arista changed the title from "[Bug]: ip/tests_mgmt_ipv6_only.py fails during teardown." to "ip/tests_mgmt_ipv6_only.py fails during teardown." on May 3, 2024
sdszhang commented May 3, 2024

@vivekverma-arista Can you attach the output with the -e "--setup-show" option and share the fixture setup/teardown sequence from when this error happened? Please also share the command you used to run the test. Thanks.
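For reference, with the sonic-mgmt run_tests.sh wrapper (which forwards the arguments given via -e to pytest), such a run would look roughly like the following; the testbed and DUT names are placeholders and the exact set of run_tests.sh options may vary between setups:

./run_tests.sh -n <testbed_name> -d <dut_name> -c ip/test_mgmt_ipv6_only.py -e "--setup-show"

--setup-show itself is a standard pytest option that prints a SETUP/TEARDOWN line for every fixture around each test, which is what makes the teardown ordering visible.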

sdszhang self-assigned this May 9, 2024
yejianquan pushed a commit that referenced this issue May 14, 2024
…y cases (#12825)

Description of PR
This PR addresses the fixture setup sequence issue and the out-of-sequence teardown issue.

The convert_and_restore_config_db_to_ipv6_only fixture runs "config reload -y" during its setup and teardown.

For feature test cases whose config is not saved into config_db.json (for example tacacs_v6, setup_streaming_telemetry, or setup_ntp), this reload needs to happen before the feature fixture's setup and after its teardown.

According to https://docs.pytest.org/en/latest/reference/fixtures.html#reference-fixtures, pytest considers only the following when deciding fixture order:

scope
dependencies
autouse
We shouldn't use autouse in this test module, so there are only two options for making convert_and_restore_config_db_to_ipv6_only run before the other fixtures:

1. Define the other fixtures in 'function' scope.
2. Define the feature fixture to request convert_and_restore_config_db_to_ipv6_only explicitly.

This PR uses option #1, because the new 'function'-scope fixtures can be reused by other cases. Option #2 reads well, but would limit the new fixture to the ipv6_only cases.
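To make the scope rule concrete, here is a minimal, self-contained pytest sketch. Every name ending in _sketch is a hypothetical stand-in rather than the actual code from #12825; the sketch only demonstrates the ordering pytest guarantees by scope, namely that higher-scope fixtures are set up first and torn down last, so a module-scope config reload brackets every function-scope feature fixture:

import pytest


@pytest.fixture(scope="module")
def convert_config_db_to_ipv6_only_sketch():
    # Stand-in for convert_and_restore_config_db_to_ipv6_only: imagine the
    # "config reload -y" that switches management to IPv6-only happening here.
    print("SETUP    module  : reload config as IPv6-only")
    yield
    # Module scope means this teardown runs only after every function-scope
    # fixture below has already been torn down.
    print("TEARDOWN module  : restore config_db.json and reload")


@pytest.fixture(scope="function")
def setup_streaming_telemetry_sketch():
    # Stand-in for a function-scope feature fixture such as
    # setup_streaming_telemetry_func. No explicit request of the module-scope
    # fixture is needed (that would be option #2); scope alone fixes the order.
    print("SETUP    function: configure telemetry client_auth")
    yield
    # Runs while the DUT is still reachable, before the module-scope reload.
    print("TEARDOWN function: restore telemetry client_auth")


def test_telemetry_output_ipv6_only_sketch(convert_config_db_to_ipv6_only_sketch,
                                           setup_streaming_telemetry_sketch):
    assert True

Running this file with pytest -s (or inspecting it with --setup-show) shows the module setup first, then the function setup, then the function teardown, and the module teardown last, which is exactly the bracketing the feature fixtures need.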

Summary:
Fixes #12705


Approach
What is the motivation for this PR?
Multiple errors observed in mgmt_ipv6 are related to the fixture setup/teardown sequence.

How did you do it?
Added two 'function'-scope fixtures, check_tacacs_v6_func and setup_streaming_telemetry_func, and modified three test cases to use them:

test_ro_user_ipv6_only
test_rw_user_ipv6_only
test_telemetry_output_ipv6_only

Co-authored-by: jianquanye@microsoft.com
sdszhang added a commit to sdszhang/sonic-mgmt that referenced this issue May 14, 2024
…y cases (sonic-net#12825)

sdszhang added a commit to sdszhang/sonic-mgmt that referenced this issue May 14, 2024
…y cases (sonic-net#12825)

yejianquan pushed a commit that referenced this issue May 15, 2024
…y cases (#12825) (#12845)

yejianquan pushed a commit that referenced this issue May 15, 2024
…y cases (#12825) (#12846)