
Add two extra lossless queues for bounced back traffic #10496

Merged: 18 commits merged into sonic-net:master from update_lossless_queue on Jun 2, 2022

Conversation

bingwang-ms
Contributor

Why I did it

This PR adds two extra lossless queues for bounced-back traffic.
HLD: sonic-net/SONiC#950

SKUs include

Arista-7050CX3-32S-C32
Arista-7050CX3-32S-D48C8
Arista-7260CX3-D108C8
Arista-7260CX3-C64
Arista-7260CX3-Q64

How I did it

Update the buffers.json.j2 and buffers_config.j2 templates to generate the new BUFFER_QUEUE table.

  • For T1 devices, queue 2 and queue 6 are set as lossless queues on T0 facing ports.
  • For T0 devices, queue 2 and queue 6 are set as lossless queues on T1 facing ports.
  • Queue 7 is added as a new lossy queue because DSCP 48 is mapped to TC 7, which in turn is mapped to Queue 7 (see the example below).
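
For illustration only (not taken from this PR; the port name, queue ranges, and profile names are assumptions), the rendered BUFFER_QUEUE entries for a T0-facing port on a T1 device could look roughly like:

    "BUFFER_QUEUE": {
        "Ethernet0|0-1": {"profile": "egress_lossy_profile"},
        "Ethernet0|2-4": {"profile": "egress_lossless_profile"},
        "Ethernet0|5":   {"profile": "egress_lossy_profile"},
        "Ethernet0|6":   {"profile": "egress_lossless_profile"},
        "Ethernet0|7":   {"profile": "egress_lossy_profile"}
    }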

How to verify it

  1. Verified by UT
  2. Verified by copying the new template and generating the buffer config with sonic-cfggen (see the sketch below)
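
A minimal sketch of that verification step (the input and output file names are placeholders, not from this PR):

    sonic-cfggen -j sample_config_db.json -t buffers_config.j2 > buffers.json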

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012
  • 202106
  • 202111

Description for the changelog

This PR adds two extra lossless queues for bounced-back traffic.

Link to config_db schema for YANG module changes

A picture of a cute animal (not mandatory but encouraged)

bingwang-ms and others added 2 commits April 7, 2022 03:52
Signed-off-by: bingwang <bingwang@microsoft.com>
Signed-off-by: bingwang <wang.bing@microsoft.com>
@lguohan
Collaborator

lguohan commented Apr 9, 2022

can we have an enable/disable flag for this feature?

@bingwang-ms
Contributor Author

bingwang-ms commented Apr 11, 2022

can we have an enable/disable flag for this feature?

Do you mean a flag for these extra queues, or the whole QoS remapping feature?
I think we can add a global flag to the DEVICE_METADATA table.

"DEVICE_METADATA": {
        "localhost": {
            "mux_tunnel_qos_remapping" : "enable"
        }
    },

If the flag mux_tunnel_qos_remapping is set to enable, we will generate the new qos.config and buffer.config, and orchagent will consume them. Otherwise nothing is changed. What do you think?
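
As a minimal sketch (hypothetical; the flag name and the final mechanism were still under discussion at this point), the templates could gate the new entries like this:

    {% if DEVICE_METADATA['localhost'].get('mux_tunnel_qos_remapping', 'disable') == 'enable' %}
    {# render the extra lossless queue and buffer entries #}
    {% endif %}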

@bingwang-ms
Contributor Author


I drafted a HLD for adding a FLAGS table. Please see also sonic-net/SONiC#982

Signed-off-by: bingwang <wang.bing@microsoft.com>
@bingwang-ms force-pushed the update_lossless_queue branch from c6bb62b to 1a2284f on April 25, 2022 07:57
Signed-off-by: bingwang <wang.bing@microsoft.com>
Signed-off-by: bingwang <wang.bing@microsoft.com>
Signed-off-by: bingwang <wang.bing@microsoft.com>
@bingwang-ms
Contributor Author

/azp run Azure.sonic-buildimage

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

stephenxs previously approved these changes Apr 28, 2022
@bingwang-ms
Contributor Author

/azp run Azure.sonic-buildimage

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Signed-off-by: bingwang <wang.bing@microsoft.com>
neethajohn previously approved these changes Apr 28, 2022
@neethajohn
Contributor

/azp run Azure.sonic-buildimage

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

stephenxs previously approved these changes Apr 29, 2022
@bingwang-ms
Contributor Author

/azp run Azure.sonic-buildimage

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

neethajohn previously approved these changes May 23, 2022
@bingwang-ms
Contributor Author

@stephenxs @neethajohn The last commit introduced the SYSTEM_DEFAULT table into the template. Please find more details at #10877.
If TunnelQosRemap is not enabled, the template has no effect and nothing is changed.
I also added two new PGs, 2 and 6, to the BUFFER_PG table:
https://github.com/Azure/sonic-buildimage/blob/1aeb85b0135ba2d9b0af17842d24ac1b0d8d7abd/device/arista/x86_64-arista_7050cx3_32s/Arista-7050CX3-32S-D48C8/buffers_extra_queues.j2#L37-L53
This is because the port-level TC_TO_PG map is changed as required by Broadcom: we are going to map TC 2 -> PG 2 and TC 6 -> PG 6 at the port level (see the sketch below).
Please feel free to discuss and comment. Thanks.
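
As a minimal sketch (the map name and the specific values are placeholders, not from this PR), that port-level mapping would land in a TC_TO_PRIORITY_GROUP_MAP entry that is then referenced from the port's PORT_QOS_MAP:

    "TC_TO_PRIORITY_GROUP_MAP": {
        "AZURE_UPLINK": {
            "2": "2",
            "6": "6"
        }
    }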

@stephenxs
Collaborator

stephenxs commented May 27, 2022

@bingwang-ms @neethajohn
So this means for all ports we will have 2 additional PGs?

  • on ports between a T1 and a dual ToR, there are 2 additional lossless PGs
  • on other ports, which previously had 1 lossy and 2 lossless PGs, there are now 3 lossy and 2 lossless PGs
Right?

Taking T1 uplinks as an example, we do not have traffic with DSCP 2/6 (mapping to TC 2/6) received on uplink ports. Even if there are such TC => PG mappings, we do not need to allocate buffer for those PGs, right?

Signed-off-by: bingwang <wang.bing@microsoft.com>
@bingwang-ms bingwang-ms requested review from a team as code owners May 30, 2022 08:31
@bingwang-ms
Contributor Author

Makes sense. There will be no traffic going to lossy PGs 2 and 6 on T1 uplink ports. I have removed the change. Thanks.

Signed-off-by: bingwang <wang.bing@microsoft.com>
Signed-off-by: bingwang <wang.bing@microsoft.com>
Signed-off-by: bingwang <wang.bing@microsoft.com>
stephenxs previously approved these changes May 31, 2022
Collaborator

@stephenxs stephenxs left a comment


LGTM.

@neethajohn neethajohn merged commit 1cc602c into sonic-net:master Jun 2, 2022
yxieca pushed a commit that referenced this pull request Jun 22, 2022
skbarista pushed a commit to skbarista/sonic-buildimage that referenced this pull request Aug 17, 2022