
Early return from FileMappedDict initialisation when in readonly mode. #5

Open · wants to merge 5 commits into base: gc_production_branch_do_not_push

Conversation

cristiangreco

This commit is an attempt to address some memory bloat problems we've
recently observed in our production deployment. It optimises the read
path (metrics export) by reducing the number of temporary objects
created when reading and aggregating data from metric files.

The attached synthetic benchmark simulates two concurrent threads that
write to and read from a shared direct_file_store while memory profiling
is enabled. Locally I observed a reduction in peak memory usage (best of
3 runs in each case):

# gc_production_branch_do_not_push
Total allocated: 127.99 MB (2231891 objects)
Total retained:  139.68 kB (609 objects)

# this branch
Total allocated: 85.82 MB (1381432 objects)
Total retained:  139.51 kB (608 objects)

I also ran the data_stores.rb performance benchmark, which shows no
change from baseline performance. I'd expect the early-return
optimisation to provide some advantage when reading a high number of big
files, but that condition is not reproduced in the benchmark.


# frozen_string_literal: true

require "prometheus/client"
require "prometheus/client/data_stores/direct_file_store"
require "prometheus/client/formats/text"

require "memory_profiler"

Prometheus::Client.config.data_store =
    Prometheus::Client::DataStores::DirectFileStore.new(dir: ".")

report = MemoryProfiler.report(allow_files: "direct_file_store.rb") do
  HISTOGRAM = Prometheus::Client.registry.histogram(
    :histogram_metric,
    docstring: "an histogram",
    buckets: [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1],
    labels: %i[label1 label2 label3],
  )

  COUNTER = Prometheus::Client.registry.counter(
    :counter_metric,
    docstring: "a counter",
    labels: %i[label4 label5 label6],
  )

  REGISTRY = Prometheus::Client.registry

  # Each thread performs 100 write+export cycles: record randomised
  # histogram and counter samples, then marshal the whole registry
  # (exercising the read path).
  def run
    100.times do
      HISTOGRAM.observe(
        rand(1024),
        labels: {
          label1: "foo" + rand(16).to_s,
          label2: rand(512),
          label3: [:a,"b"].sample,
        },
      )

      COUNTER.increment(
        by: rand(1024),
        labels: {
          label4: "foo" + rand(16).to_s,
          label5: rand(512),
          label6: [:a,"b"].sample,
        },
      )

      Prometheus::Client::Formats::Text.marshal(REGISTRY)
    end
  end

  t1 = Thread.new { run }
  t2 = Thread.new { run }

  t1.join
  t2.join
end

report.pretty_print(scale_bytes: true, detailed_report: true)

During the read path (metrics export) we open all existing metric files
and read through them in order to aggregate metrics across different
processes. This condition is signalled via the 'readonly' flag.
However, when reading non-empty files (which is the normal condition
during metrics export) we end up parsing the file content twice: first
during FileMappedDict initialisation, then again at the call site.

This early return saves one full file read and avoids the memory bloat
of unpacking the file content twice.
@@ -196,6 +196,8 @@ def initialize(filename, readonly = false)
        open_file(filename, readonly)
        @used = @f.read(4).unpack('l')[0] if @capacity > 0
 
+       return if readonly
+
        if @used > 0
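
For context, here is roughly how the whole method reads after this hunk; a sketch reconstructed from the diff context above, not the verbatim source:

def initialize(filename, readonly = false)
  @positions = {}
  open_file(filename, readonly)
  @used = @f.read(4).unpack('l')[0] if @capacity > 0

  # Readers only need all_values, so they can skip the eager key indexing.
  return if readonly

  if @used > 0
    # Writers index each key's offset up front so later writes can seek
    # straight to the right position in the file.
    read_all_values.each do |key, _value, pos|
      @positions[key] = pos
    end
  end
end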


I like this idea. However, it makes usage of this object dangerous.
Basically, this means that if you init with readonly, the only method you can call is all_values.
If you call anything else, you'll corrupt the file.

This happens to be fine right now, because in the only case where we use readonly, that's the only method we call. But I'm worried about not being aware of this if we (or anyone else) ever modify the code down the line; it makes things quite brittle / dangerous.

Can we have some sort of @file_map_loaded? instance var that gets set by read_all_values? Then all the other methods would have to check it and call read_all_values if that var is false.

It's quite dirty, but it'd help with the memory allocation (see the sketch below).
Alternatively, we can change all_values so that it doesn't call read_all_values, and uses @positions instead? It'll make its code more complex, but accomplish what we're trying to do here.
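
A rough sketch of that guard-variable idea; the names @file_map_loaded and ensure_file_map_loaded are hypothetical, taken from the suggestion above rather than from any actual commit:

def initialize(filename, readonly = false)
  @positions = {}
  open_file(filename, readonly)
  @used = @f.read(4).unpack('l')[0] if @capacity > 0
  @file_map_loaded = false # nothing indexed yet; load lazily on first use
end

# Every method that touches @positions would have to call this first.
def ensure_file_map_loaded
  return if @file_map_loaded

  if @used > 0
    read_all_values.each { |key, _value, pos| @positions[key] = pos }
  end
  @file_map_loaded = true
end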

cristiangreco (Author)

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I agree. I was focused on the code path followed by exporting/formatting and didn't realise that other methods would break if @positions is not indexed at creation time.

On that point, usage of the readonly flag might be a bit confusing here, in that it still allows write operations to go through, but that's outside the topic of this PR.

So I pushed a second commit that reverts my previous change (@positions is again indexed at creation time) and uses @positions in all_values to seek across all values. Things look good, but the peak memory reduction I got in the benchmark attached to this PR is not as exciting as it was before:

# gc_production_branch_do_not_push
Total allocated: 127.99 MB (2231891 objects)
Total retained:  139.68 kB (609 objects)

# Second commit in this PR
Total allocated: 95.93 MB (1566004 objects)
Total retained:  139.51 kB (608 objects)

This is because, while we're no longer reading the file content twice, we're still creating more short-lived objects.
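
For reference, all_values after the second commit presumably looks something like this; a sketch based on the description above, not the actual diff:

def all_values
  @positions.map do |key, pos|
    @f.seek(pos)                      # jump straight to this key's value
    value = @f.read(8).unpack('d')[0] # values are stored as 8-byte doubles
    [key, value]
  end
end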

However, this second change leaves read_all_values used only at creation time, which opens the door to another optimisation: I pushed a third commit that rewrites read_all_values to yield key-position pairs and avoid reading metric values at creation time.
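
Sketched from that description (again reconstructed rather than verbatim, and the 8-byte header offset is an assumption), read_all_values now walks the file, yields each key together with the offset of its value, and seeks past the value instead of unpacking it:

def read_all_values
  @f.seek(8) # skip the header holding the used-bytes counter (assumed 8 bytes)
  while @f.pos < @used
    padded_len = @f.read(4).unpack('l')[0]
    key = @f.read(padded_len).unpack("A#{padded_len}")[0].strip
    yield key, @f.pos # the 8-byte value starts right here
    @f.seek(8, :CUR)  # step over the value without allocating it
  end
end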

This seems to bring numbers back to the original figures:

# gc_production_branch_do_not_push
Total allocated: 127.99 MB (2231891 objects)
Total retained:  139.68 kB (609 objects)

# Second commit in this PR
Total allocated: 95.93 MB (1566004 objects)
Total retained:  139.51 kB (608 objects)

# Third commit in this PR
Total allocated: 86.27 MB (1334209 objects)
Total retained:  140.18 kB (612 objects)

@dmagliola Apr 18, 2019


I like this new version!
I have one further proposal: read_all_values is now only called from one place. Instead of accepting a block and yielding to it for each value, you could just populate @positions from inside read_all_values and remove that indirection.

(sorry for my super late review)

cristiangreco (Author)


@dmagliola I pushed another commit with the changes you suggested. Would you mind taking another pass?

CC @Sinjo

-        values << [encoded.strip, value, @f.pos - 8]
+        key = @f.read(padded_len).unpack("A#{padded_len}")[0].strip
+        @positions[key] = @f.pos
+        @f.seek(8, :CUR)
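
In context, the loop now does nothing but record offsets. A sketch of the whole method as it presumably stands after this commit, using the populate_positions name suggested in the review below (the original still calls it read_all_values):

def populate_positions
  @f.seek(8) # assumed 8-byte header before the first entry
  while @f.pos < @used
    padded_len = @f.read(4).unpack('l')[0]
    key = @f.read(padded_len).unpack("A#{padded_len}")[0].strip
    @positions[key] = @f.pos # remember where this key's value lives
    @f.seek(8, :CUR)         # step over the 8-byte value itself
  end
end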


Ooooh, this is so much better.
Can we name this method populate_positions, or something like that?

Other than that, 👍 from me.
I'd check with @Sinjo, though, it's pretty much his repo now :D

Btw, I'm assuming you've updated this repo to the latest master on the main repo, now that prometheus#95 is merged?
If not, it's probably a good idea to do that.

cristiangreco (Author)

> Btw, I'm assuming you've updated this repo to the latest master on the main repo, now that prometheus#95 is merged? If not, probably a good idea to do that.

Nope, this PR is based on gc_production_branch_do_not_push of this forked repo (which in turn is just a copy of prometheus#95, I think?). Probably better to catch up with @Sinjo to understand whether we want to get these changes tested in production.

@dmagliola

@cristiangreco Btw, have you checked that this last version is still better for memory? I'm assuming it is, but just in case...

@cristiangreco (Author)

> @cristiangreco Btw, have you checked that this last version is still better for memory? I'm assuming it is, but just in case...

Yep, figures stay the same.
