[Dynamic buffer calc] Bug fix: Remove PGs from an administratively down port. #1677

Merged
merged 1 commit into from
Mar 29, 2021

Conversation

stephenxs
Copy link
Collaborator

@stephenxs stephenxs commented Mar 20, 2021

What I did
Bug fix: Remove PGs from an administratively down port.

Signed-off-by: Stephen Sun stephens@nvidia.com

Why I did it
To fix bugs

How I verified it
Ran regression and vs tests.

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012

Details if related

  • Remove PGs from an administratively down port.
    • Introduce a new state, PORT_ADMIN_DOWN, which indicates that the port is administratively down.
    • Remove all PGs when the port is shut down and re-add all configured PGs when the port is started up.
    • Only record the new value, without touching BUFFER_PG_TABLE, if any of the following events occurs while a port is administratively down:
      • a port's MTU, speed, or cable length is updated
      • a new PG is added to a port or an existing PG is removed from a port
    • Optimize the port event handling flow, since refreshPriorityGroupsForPort should be called only once even when more than one field is updated.
    • Adjust the Lua plugin that calculates the buffer pool size accordingly.
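The first four bullets amount to keeping a port's configured PGs separate from the PGs actually applied to BUFFER_PG_TABLE, and reconciling the two on admin-state changes. A minimal C++ sketch of that bookkeeping, using hypothetical names (`BufferPgCache`, `PortState`) rather than the actual sonic-swss classes:

```cpp
#include <map>
#include <set>
#include <string>

// Illustrative sketch only: configured PGs are always recorded, but they
// are applied (i.e., written to BUFFER_PG_TABLE) only while the port is
// administratively up. Names are hypothetical, not the real buffermgrd API.

enum class PortState { Admin_Up, Admin_Down };

class BufferPgCache {
public:
    // Record a configured PG; apply it only if the port is admin up.
    void setPg(const std::string& port, const std::string& pg) {
        configured_[port].insert(pg);
        if (state_[port] == PortState::Admin_Up)
            applied_[port].insert(pg);   // stands in for a BUFFER_PG_TABLE write
    }

    // Remove a PG from both the configuration and the applied set.
    void removePg(const std::string& port, const std::string& pg) {
        configured_[port].erase(pg);
        applied_[port].erase(pg);
    }

    // Admin-state change: drop all applied PGs on shutdown,
    // re-apply every configured PG on startup.
    void setAdminState(const std::string& port, PortState s) {
        state_[port] = s;
        if (s == PortState::Admin_Down)
            applied_[port].clear();
        else
            applied_[port] = configured_[port];
    }

    std::set<std::string> appliedPgs(const std::string& port) const {
        auto it = applied_.find(port);
        return it == applied_.end() ? std::set<std::string>{} : it->second;
    }

private:
    std::map<std::string, PortState> state_;  // default-constructs to Admin_Up
    std::map<std::string, std::set<std::string>> configured_;
    std::map<std::string, std::set<std::string>> applied_;
};
```

With this split, an MTU/speed/cable-length update or a PG add/remove arriving while the port is down only touches `configured_`; the applied set is rebuilt in one step when the port comes back up, which is also why a single refresh call suffices.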

@stephenxs stephenxs marked this pull request as ready for review March 20, 2021 06:23
@neethajohn
Copy link
Contributor

@stephenxs, can you move the lua plugin optimizations out of this PR and keep it only for the port down bug fix?

@stephenxs
Copy link
Collaborator Author

> @stephenxs, can you move the lua plugin optimizations out of this PR and keep it only for the port down bug fix?

Hi @neethajohn,
Actually, the lua plugin change is also part of the fix. Perhaps I should change the wording from "optimize" to "adjust"?
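For context on why the plugin change belongs to the fix: once an admin-down port holds no PGs, the buffer-pool-size calculation must stop reserving headroom for it. A minimal sketch of that accounting, rendered in C++ with hypothetical names and numbers (the real implementation is a Lua plugin over the Redis tables):

```cpp
#include <vector>

// Illustrative only: pool size = usable memory minus reserved headroom,
// where admin-down ports contribute no PG headroom because their PGs
// have been removed. PortInfo and computePoolSize are hypothetical.

struct PortInfo {
    bool adminUp;
    long pgHeadroomBytes;  // sum of this port's configured PG headroom
};

long computePoolSize(long usableMemory, const std::vector<PortInfo>& ports) {
    long reserved = 0;
    for (const auto& p : ports) {
        if (p.adminUp)                       // admin-down ports hold no PGs,
            reserved += p.pgHeadroomBytes;   // so nothing is reserved for them
    }
    return usableMemory - reserved;
}
```

Without this adjustment, the plugin would keep subtracting headroom for ports whose PGs no longer exist, under-sizing the shared pool.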

@liat-grozovik liat-grozovik merged commit 8eb2326 into sonic-net:master Mar 29, 2021
@stephenxs stephenxs deleted the remove-pg-admin-down branch March 29, 2021 11:06
daall pushed a commit that referenced this pull request Apr 1, 2021
raphaelt-nvidia pushed a commit to raphaelt-nvidia/sonic-swss that referenced this pull request Oct 5, 2021
EdenGri pushed a commit to EdenGri/sonic-swss that referenced this pull request Feb 28, 2022
re-enable after fix those tests

Signed-off-by: Guohan Lu <lguohan@gmail.com>