
services/horizon/internal/ingest: Remove unnecessary use of ChangeCompactor to reduce memory bloat during ingestion #5252

Merged: 15 commits into stellar:master on Mar 25, 2024

Conversation

tamirms
Contributor

@tamirms tamirms commented Mar 20, 2024

PR Checklist

PR Structure

  • This PR has reasonably narrow scope (if not, break it down into smaller PRs).
  • This PR avoids mixing refactoring changes with feature changes (split into two PRs
    otherwise).
  • This PR's title starts with name of package that is most changed in the PR, ex.
    services/friendbot, or all or doc if the changes are broad or impact many
    packages.

Thoroughness

  • This PR adds tests for the most critical parts of the new functionality or fixes.
  • I've updated any docs (developer docs, .md
    files, etc... affected by this change). Take a look in the docs folder for a given service,
    like this one.

Release planning

  • I've updated the relevant CHANGELOG (here for Horizon) if
    needed with deprecations, added features, breaking changes, and DB schema changes.
  • I've decided if this PR requires a new major/minor version according to
    semver, or if it's mainly a patch change. The PR is targeted at the next
    release branch if it's not a patch change.

What

Part of #5258

Horizon users have reported that state rebuilds have caused Horizon's memory usage to exceed 32 GB. After profiling Horizon we discovered that most memory is being retained in the ChangeCompactor:

(memory profile image: profile001)

The ChangeCompactor is described in detail here: https://github.com/stellar/go/blob/master/ingest/change_compactor.go#L49. It turns out that there is no need to use the ChangeCompactor when ingesting from the history archives, because all ledger entry changes obtained from the history archives are of the created type, which means we will never compact any ledger entries.

By removing the ChangeCompactor from all the state processors, we avoid creating additional copies of ledger entries obtained from the history archives. This has resulted in a substantial reduction in memory usage: previously a state rebuild used ~32 GB of RAM, and now it completes in ~5 GB:

(screenshot: memory usage during state rebuild, 2024-03-20)

The only place where it makes sense to use a ChangeCompactor is when processing ledger entry changes extracted from a LedgerCloseMeta payload. It is possible for a ledger entry to be created and then modified or removed within the same ledger.
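A hypothetical sketch of why compaction does pay off for per-ledger changes (again, illustrative types only, not the actual stellar/go API): a key that is created and then updated within the same ledger squashes to a single created change carrying the latest data, and a key that is created and then removed cancels out entirely.

```go
package main

import "fmt"

type ChangeType int

const (
	Created ChangeType = iota
	Updated
	Removed
)

type Change struct {
	Type      ChangeType
	Key, Data string
}

// compact squashes a per-ledger change sequence: Created+Updated collapses to
// one Created with the latest data, and Created+Removed cancels out, so
// downstream processors see at most one change per key.
func compact(changes []Change) map[string]Change {
	out := map[string]Change{}
	for _, ch := range changes {
		prev, seen := out[ch.Key]
		switch {
		case !seen:
			out[ch.Key] = ch
		case prev.Type == Created && ch.Type == Updated:
			out[ch.Key] = Change{Type: Created, Key: ch.Key, Data: ch.Data}
		case prev.Type == Created && ch.Type == Removed:
			delete(out, ch.Key) // created and removed in the same ledger: net no-op
		default:
			out[ch.Key] = ch // simplified; the real compactor handles more cases
		}
	}
	return out
}

func main() {
	changes := []Change{
		{Created, "a", "v1"},
		{Updated, "a", "v2"},
		{Created, "b", "v1"},
		{Removed, "b", ""},
	}
	squashed := compact(changes)
	fmt.Println(len(squashed)) // 1: entry "b" cancelled out
	fmt.Println(squashed["a"].Type == Created, squashed["a"].Data) // true v2
}
```

This is why the compactor remains worthwhile for LedgerCloseMeta processing even though it is dead weight for history-archive ingestion.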

Known limitations

[N/A]

@tamirms tamirms force-pushed the change-compactor branch 2 times, most recently from d3399d1 to 3621cef Compare March 20, 2024 22:01
@tamirms tamirms marked this pull request as ready for review March 20, 2024 22:20
@tamirms tamirms requested a review from a team March 20, 2024 22:25
@ire-and-curses
Member

The only place where it makes sense to use a ChangeCompactor is when processing ledger entry changes extracted from a LedgerCloseMeta payload. It is possible for a ledger entry to be created and then modified or removed within the same ledger.

It's probably worth adding a comment to ChangeCompactor saying something like this, to prevent incorrect usage in future.

Contributor

@sreuland sreuland left a comment


lgtm, great write-up as well on problem/solution in pr description, left some minor comments for consideration.

@tamirms tamirms merged commit 4413131 into stellar:master Mar 25, 2024
29 checks passed
@tamirms tamirms deleted the change-compactor branch March 25, 2024 15:23
@tamirms tamirms mentioned this pull request Apr 17, 2024
7 tasks