Increase ip2me CIR/CBS for faster in-band file transfers #1000
Conversation
Increase the incoming packet rate on in-band interfaces to support faster download of large files. SONiC firmware image download over in-band can take a long time if the incoming packet rate is limited to 600pps. This change increases it to 6000pps, which helps especially when Zero Touch Provisioning or sonic_installer performs a firmware upgrade over in-band interfaces. Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
retest this please
@@ -50,8 +50,8 @@
      "queue": "1",
      "meter_type":"packets",
      "mode":"sr_tcm",
-     "cir":"600",
-     "cbs":"600",
+     "cir":"6000",
I think this can make the CPU/kernel busy if there are any packets, like pings, destined to the switch. I assume it was set to 600pps to control such packets. How about setting it to 1000pps instead of increasing it 10x?
Code downloads are the pain point. With 1000pps we see speeds of ~1.4MB/s, and it takes 7 minutes to download a full SONiC image, compared to 8.4MB/s with 6000pps. I think 6000pps is not that bad. Most switches have an Intel CPU, and 8.4MB/s of in-band bandwidth looks like a number on the conservative side.
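The figures quoted here can be sanity-checked with a back-of-the-envelope calculation. The payload size (~1448 bytes per full TCP segment at a 1500-byte MTU) and the ~600 MB image size are assumptions for illustration; actual numbers vary with MTU, protocol overhead, and image size.

```python
# Back-of-the-envelope throughput check: pps limit x TCP payload per packet.
MSS = 1448  # bytes of TCP payload per full-size segment (assumption)

def throughput_mb_s(pps, mss=MSS):
    """Best-case goodput in MB/s for a given packets-per-second limit."""
    return pps * mss / 1e6

for pps in (600, 1000, 6000):
    print(f"{pps} pps ~= {throughput_mb_s(pps):.1f} MB/s")

image_mb = 600  # rough SONiC image size (assumption)
minutes = image_mb / throughput_mb_s(1000) / 60
print(f"{image_mb} MB image at 1000 pps: {minutes:.1f} min")
```

This reproduces the numbers in the comment: ~1.4 MB/s at 1000pps (about 7 minutes for an image) and ~8.7 MB/s at 6000pps, close to the observed 8.4 MB/s once real-world overhead is included.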
@rajendra-dendukuri what about keeping the current value of 600 and leaving it to the user to change the setting?
@stcheng Are you suggesting that the end user change the setting in source code and re-compile an image?
Due to some issues related to SAI attribute properties and the way copporch was implemented, we skip reloading the new copp JSON during warm reboot (https://github.com/Azure/sonic-buildimage/blob/master/dockers/docker-orchagent/swssconfig.sh#L44), on the assumption that the copp config won't change.
It is probably time to revisit that assumption and find a proper fix.
Agree with jipan.
So, what are the next steps for this?
We are changing the default value, and there is concern about the CPU impact. Can you provide CPU utilization data for various pps limits? What is the limit for knet driver rx per second? Can we get CPU utilization measurements for 3000/6000/10000 pps? Warm reboot is another concern, but we can address it in another PR.
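One way to collect the requested per-pps CPU numbers is to sample `/proc/stat` while the transfer runs at each limit. This is a hedged sketch, not how the data in this thread was actually gathered: it is Linux-only, the function names are mine, and the field layout follows proc(5).

```python
# Sample aggregate CPU utilization from /proc/stat over a short interval.
import time

def cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        vals = list(map(int, f.readline().split()[1:]))
    idle = vals[3] + vals[4]  # idle + iowait, per proc(5)
    return idle, sum(vals)

def cpu_utilization(interval=1.0):
    """Percent of CPU time spent busy during `interval` seconds."""
    idle0, total0 = cpu_times()
    time.sleep(interval)
    idle1, total1 = cpu_times()
    total = max(1, total1 - total0)
    busy = total - (idle1 - idle0)
    return 100.0 * busy / total

if __name__ == "__main__":
    # Run this in a loop while repeating the download at 3000/6000/10000 pps.
    print(f"cpu util: {cpu_utilization():.1f}%")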
With no policer installed, the stats observed before drops started: curl command throughput and CPU utilization while performing a large file transfer over in-band. (Measurement table not preserved in this capture.)
Can we try to conclude on this? Without these changes, in-band code download doesn't make sense.
retest vs please
test this please |
@rajendra-dendukuri instead of changing the default config, can this be dynamically changed only during the ZTP process?
Given the above two reasons, I feel we should increase the default to a reasonable, agreed-upon value.
@rajendra-dendukuri #2 we should block user configuration during ZTP, since this has other implications, like losing user-configured data when disabling ZTP. If the value needs to go beyond the SONiC-defined default, I prefer changing it dynamically based on ZTP enable and disable.
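The "change it dynamically" idea can be sketched as a small helper a ZTP hook might use: render the ip2me policer entry with a temporary cir/cbs, apply it for the download, then restore the default. Everything here is hypothetical except the JSON shape, which mirrors the diff in this PR; `with_rate`, the apply step, and the idea of wiring it into ZTP are my assumptions, not SONiC's actual ZTP API.

```python
# Hypothetical helper: derive a boosted copp policer entry from the default,
# so a ZTP hook could apply it before a download and restore it afterwards.
import json

DEFAULT = {"queue": "1", "meter_type": "packets", "mode": "sr_tcm",
           "cir": "600", "cbs": "600"}

def with_rate(entry, pps):
    """Return a copy of a copp policer entry with cir/cbs set to `pps`."""
    out = dict(entry)
    out["cir"] = out["cbs"] = str(pps)
    return out

boosted = with_rate(DEFAULT, 6000)
print(json.dumps(boosted, indent=2))
# A ZTP hook would load the boosted entry (e.g. via swssconfig), run the
# transfer, then reapply DEFAULT when ZTP finishes or is disabled.
```

The design question in this thread is exactly whether this restore-on-disable dance is worth it versus simply shipping a higher default.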
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com> The motivation for this change is described in the proposal sonic-net/SONiC#935 and the SAI proposal opencomputeproject/SAI#1404. NOTE: Requires a SAI update once opencomputeproject/SAI#1404 is in.
What I did
Increased packet rate for traffic destined to CPU.
Why I did it
This is required for ZTP or sonic_installer to download large files (firmware image) over in-band interface.
How I verified it
sonic_installer install -y "url" over in-band network
Details if related