
Why does the memory not increase linearly when the HTTPRoutes scale up? #3698

Closed
Tracked by #3693
arkodg opened this issue Jun 28, 2024 · 4 comments · Fixed by #4263
Assignees: shawnh2
Labels: area/ci, area/performance
Milestone: v1.1.0

Comments

@arkodg
Contributor

arkodg commented Jun 28, 2024

No description provided.

@arkodg added the area/ci and area/performance labels Jun 28, 2024
@arkodg added this to the v1.1.0 milestone Jun 28, 2024
@arkodg added the help wanted label Jun 28, 2024
@shawnh2 self-assigned this Jun 29, 2024
@shawnh2 removed the help wanted label Jun 29, 2024
@arkodg
Contributor Author

arkodg commented Jul 1, 2024

thanks @shawnh2 for picking this one up!

@shawnh2
Contributor

shawnh2 commented Aug 12, 2024

According to the heap profile, it looks like there are two places that consume the most memory:

1. The DeepCopyInto method in XDS ResourceVersionTable

Specifically:

func (t *ResourceVersionTable) DeepCopyInto(out *ResourceVersionTable) {

      flat  flat%   sum%        cum   cum%
      196.52MB 24.68% 79.92%   196.52MB 24.68%  reflect.New
   ...
         0     0% 97.46%   253.12MB 31.79%  github.com/envoyproxy/gateway/internal/message.HandleSubscription[go.shape.string,go.shape.*uint8]
         0     0% 97.46%   204.05MB 25.63%  github.com/envoyproxy/gateway/internal/xds/translator/runner.(*Runner).subscribeAndTranslate
         0     0% 97.46%   204.05MB 25.63%  github.com/envoyproxy/gateway/internal/xds/translator/runner.(*Runner).subscribeAndTranslate.func1
         0     0% 97.46%   203.55MB 25.56%  github.com/envoyproxy/gateway/internal/xds/types.(*ResourceVersionTable).DeepCopy
         0     0% 97.46%   203.55MB 25.56%  github.com/envoyproxy/gateway/internal/xds/types.(*ResourceVersionTable).DeepCopyInto
         0     0% 97.46%   222.07MB 27.89%  github.com/telepresenceio/watchable.(*Map[go.shape.string,go.shape.*uint8]).Store
         0     0% 97.46%   222.07MB 27.89%  github.com/telepresenceio/watchable.(*Map[go.shape.string,go.shape.*uint8]).unlockedStore
         0     0% 97.46%   222.07MB 27.89%  github.com/telepresenceio/watchable.DeepCopy[go.shape.*uint8]
         0     0% 97.46%   134.51MB 16.89%  google.golang.org/protobuf/internal/impl.(*MessageInfo).initOneofFieldCoders.func5
         0     0% 97.46%   202.04MB 25.37%  google.golang.org/protobuf/internal/impl.(*MessageInfo).merge
         0     0% 97.46%   202.04MB 25.37%  google.golang.org/protobuf/internal/impl.(*MessageInfo).mergePointer
         0     0% 97.46%   150.01MB 18.84%  google.golang.org/protobuf/internal/impl.mergeMessage
         0     0% 97.46%   201.04MB 25.25%  google.golang.org/protobuf/internal/impl.mergeMessageSlice
         0     0% 97.46%   203.55MB 25.56%  google.golang.org/protobuf/proto.Clone
         0     0% 97.46%   202.54MB 25.44%  google.golang.org/protobuf/proto.mergeOptions.mergeMessage

As you can see, it calls the protobuf copy frequently, resulting in calls to reflect.New that account for about 25% of the total memory usage.
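
To make the mechanism concrete, below is a minimal, hypothetical sketch of a deep copy built on proto.Clone; the struct layout is illustrative and not the actual ResourceVersionTable definition. proto.Clone walks each message reflectively and allocates a fresh copy of every nested field, which is where the reflect.New samples in the profile come from when the copy runs on every snapshot.

// Hypothetical sketch of a proto.Clone-based deep copy (not the real
// ResourceVersionTable layout), showing why it is allocation-heavy.
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

// resourceTable stands in for a table of xDS resources keyed by type URL.
type resourceTable struct {
	resources map[string][]proto.Message
}

// deepCopyInto clones every resource; each proto.Clone reflectively
// rebuilds the full message tree, so the cost scales with the number
// and size of resources, i.e. with the number of HTTPRoutes.
func (t *resourceTable) deepCopyInto(out *resourceTable) {
	out.resources = make(map[string][]proto.Message, len(t.resources))
	for typeURL, msgs := range t.resources {
		copied := make([]proto.Message, 0, len(msgs))
		for _, m := range msgs {
			copied = append(copied, proto.Clone(m))
		}
		out.resources[typeURL] = copied
	}
}

func main() {
	in := &resourceTable{resources: map[string][]proto.Message{
		"type.googleapis.com/google.protobuf.Value": {structpb.NewStringValue("example")},
	}}
	var out resourceTable
	in.deepCopyInto(&out)
	fmt.Println("copied resource types:", len(out.resources))
}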

2. YAML Marshal

      flat  flat%   sum%        cum   cum%
  439.84MB 55.24% 55.24%   449.91MB 56.50%  sigs.k8s.io/yaml/goyaml%2ev2.yaml_emitter_emit
   ...
      11MB  1.38% 93.36%    12.50MB  1.57%  sigs.k8s.io/yaml/goyaml%2ev2.(*decoder).scalar
   10.07MB  1.26% 94.63%    10.07MB  1.26%  sigs.k8s.io/yaml/goyaml%2ev2.yaml_string_write_handler
    7.05MB  0.89% 95.51%    97.07MB 12.19%  sigs.k8s.io/yaml/goyaml%2ev2.(*decoder).sequence

There is no single obvious call site in our code, but this part ends up consuming almost half of the total memory.
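
For reference, heap listings like the ones above are the standard top view from Go's pprof. A minimal sketch of exposing the heap profile endpoint follows; the listen address is an example only, not Envoy Gateway's actual debug configuration.

// Minimal sketch: expose Go's heap profile over HTTP so it can be read
// with `go tool pprof`. The address below is an example only.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// Inspect allocations with, for example:
	//   go tool pprof -top http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}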

@arkodg
Contributor Author

arkodg commented Aug 12, 2024

thanks for surfacing this @shawnh2
I think #3980 should eliminate both these cases


This issue has been automatically marked as stale because it has not had activity in the last 30 days.
