
refactor: have process_input delegate to process_multiple_input #1792

Merged · 1 commit · Apr 5, 2024

Conversation

@mxinden (Collaborator) commented Apr 4, 2024

The `Connection::process_input` and `Connection::process_multiple_input` functions are identical, except that the latter handles multiple `Datagram`s.

To avoid changes to one without updating the other, have `process_input` simply delegate to `process_multiple_input`.

The commit also makes the equivalent change to `neqo_http3::Http3Client`.


I would be surprised if std::iter::once adds a performance hit. Let's see what the benchmarks say.
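The delegation pattern the PR describes can be sketched roughly as follows. This is a hypothetical, simplified illustration: the real neqo methods operate on `Datagram`s and take an `Instant`, and return output; here a datagram is just a `Vec<u8>` and "processing" only counts bytes.

```rust
// Simplified sketch of the single-path delegation described above.
struct Connection {
    bytes_processed: usize,
}

impl Connection {
    /// Batch entry point: handles any number of datagrams.
    fn process_multiple_input<I>(&mut self, dgrams: I)
    where
        I: IntoIterator<Item = Vec<u8>>,
    {
        for d in dgrams {
            self.bytes_processed += d.len();
        }
    }

    /// Single-datagram entry point: wrap the input in a one-element
    /// iterator and delegate, so the two code paths can never diverge.
    fn process_input(&mut self, dgram: Vec<u8>) {
        self.process_multiple_input(std::iter::once(dgram));
    }
}

fn main() {
    let mut conn = Connection { bytes_processed: 0 };
    conn.process_input(vec![0u8; 3]);
    conn.process_multiple_input(vec![vec![0u8; 2], vec![0u8; 4]]);
    assert_eq!(conn.bytes_processed, 9);
}
```

Since `std::iter::once` is a zero-allocation iterator over a single value, the wrapper is expected to compile down to roughly the same code as the original single-datagram loop body.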


codecov bot commented Apr 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.12%. Comparing base (1dc8ea3) to head (a63c80d).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1792      +/-   ##
==========================================
+ Coverage   93.04%   93.12%   +0.07%     
==========================================
  Files         116      116              
  Lines       36098    36097       -1     
==========================================
+ Hits        33589    33614      +25     
+ Misses       2509     2483      -26     



github-actions bot commented Apr 5, 2024

Benchmark results

Performance differences relative to a33fe60.

  • coalesce_acked_from_zero 1+1 entries
    time: [197.94 ns 198.48 ns 199.04 ns]
    change: [+2.7004% +3.0993% +3.4984%] (p = 0.00 < 0.05)
    💔 Performance has regressed.

  • coalesce_acked_from_zero 3+1 entries
    time: [238.54 ns 238.95 ns 239.43 ns]
    change: [+0.6338% +0.9377% +1.2685%] (p = 0.00 < 0.05)
    Change within noise threshold.

  • coalesce_acked_from_zero 10+1 entries
    time: [238.59 ns 239.36 ns 240.27 ns]
    change: [+0.3417% +0.7690% +1.1552%] (p = 0.00 < 0.05)
    Change within noise threshold.

  • coalesce_acked_from_zero 1000+1 entries
    time: [219.68 ns 219.85 ns 220.04 ns]
    change: [+0.2062% +1.1037% +1.9807%] (p = 0.01 < 0.05)
    Change within noise threshold.

  • RxStreamOrderer::inbound_frame()
    time: [118.30 ms 118.38 ms 118.48 ms]
    change: [-1.6867% -1.5909% -1.4887%] (p = 0.00 < 0.05)
    💚 Performance has improved.

  • transfer/Run multiple transfers with varying seeds
    time: [121.07 ms 121.33 ms 121.59 ms]
    thrpt: [32.899 MiB/s 32.968 MiB/s 33.039 MiB/s]
    change:
    time: [+1.3341% +1.6258% +1.9459%] (p = 0.00 < 0.05)
    thrpt: [-1.9087% -1.5998% -1.3165%]
    Change within noise threshold.

  • transfer/Run multiple transfers with the same seed
    time: [122.09 ms 122.26 ms 122.43 ms]
    thrpt: [32.673 MiB/s 32.718 MiB/s 32.763 MiB/s]
    change:
    time: [+1.8113% +1.9985% +2.1829%] (p = 0.00 < 0.05)
    thrpt: [-2.1363% -1.9593% -1.7791%]
    Change within noise threshold.

  • 1-conn/1-100mb-resp (aka. Download)/client
    time: [1.1008 s 1.1173 s 1.1391 s]
    thrpt: [87.787 MiB/s 89.504 MiB/s 90.846 MiB/s]
    change:
    time: [-3.3272% -1.7385% +0.1247%] (p = 0.09 > 0.05)
    thrpt: [-0.1246% +1.7692% +3.4417%]
    No change in performance detected.

  • 1-conn/10_000-1b-seq-resp (aka. RPS)/client
    time: [388.16 ms 390.82 ms 393.51 ms]
    thrpt: [25.412 Kelem/s 25.587 Kelem/s 25.763 Kelem/s]
    change:
    time: [-0.9081% +0.1116% +1.0817%] (p = 0.82 > 0.05)
    thrpt: [-1.0702% -0.1114% +0.9164%]
    No change in performance detected.

  • 100-seq-conn/1-1b-resp (aka. HPS)/client
    time: [3.3804 s 3.3833 s 3.3862 s]
    thrpt: [29.532 elem/s 29.557 elem/s 29.582 elem/s]
    change:
    time: [+0.4209% +0.5462% +0.6668%] (p = 0.00 < 0.05)
    thrpt: [-0.6624% -0.5432% -0.4191%]
    Change within noise threshold.

Client/server transfer results

Transfer of 134217728 bytes over loopback.

| Client | Server | CC    | Pacing | Mean [ms]      | Min [ms] | Max [ms] | Relative |
|:-------|:-------|:------|:-------|---------------:|---------:|---------:|---------:|
| msquic | msquic |       |        |  824.0 ± 243.1 |    519.4 |   1387.3 |     1.00 |
| neqo   | msquic | reno  | on     | 2171.7 ± 249.5 |   1908.4 |   2545.3 |     1.00 |
| neqo   | msquic | reno  |        | 2020.4 ± 214.2 |   1868.1 |   2433.5 |     1.00 |
| neqo   | msquic | cubic | on     |  1913.8 ± 48.4 |   1856.8 |   2005.3 |     1.00 |
| neqo   | msquic | cubic |        | 2107.8 ± 290.2 |   1895.2 |   2877.7 |     1.00 |
| msquic | neqo   | reno  | on     | 3341.0 ± 171.6 |   3247.4 |   3781.4 |     1.00 |
| msquic | neqo   | reno  |        | 3436.0 ± 240.2 |   3176.5 |   3783.9 |     1.00 |
| msquic | neqo   | cubic | on     |  3264.3 ± 74.4 |   3204.0 |   3428.2 |     1.00 |
| msquic | neqo   | cubic |        | 3241.7 ± 115.5 |   3114.9 |   3552.4 |     1.00 |
| neqo   | neqo   | reno  | on     | 2998.2 ± 212.5 |   2819.5 |   3567.0 |     1.00 |
| neqo   | neqo   | reno  |        |  2853.3 ± 57.9 |   2766.1 |   2928.6 |     1.00 |
| neqo   | neqo   | cubic | on     | 3175.7 ± 192.2 |   2956.3 |   3647.7 |     1.00 |
| neqo   | neqo   | cubic |        | 3166.0 ± 169.0 |   3074.9 |   3641.4 |     1.00 |


@martinthomson (Member) left a comment


I'm fairly sure that std::iter::once optimizes out well enough that this won't matter. We have far too much noise in the benchmarks to notice either way.

At this layer of the code, simply avoiding duplication is worth any extra overhead.

@larseggert larseggert added this pull request to the merge queue Apr 5, 2024
Merged via the queue into mozilla:main with commit 5dfe106 Apr 5, 2024
16 checks passed