From a6be2cb149a2168ddca63404109810adc52bd5da Mon Sep 17 00:00:00 2001
From: Govind Kamat
Date: Tue, 9 Jul 2024 14:24:14 -0700
Subject: [PATCH] Updated README to recommend use of an external data store
 with large corpora.

Signed-off-by: Govind Kamat
---
 big5/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/big5/README.md b/big5/README.md
index a35e9d14..e9e6e2bb 100755
--- a/big5/README.md
+++ b/big5/README.md
@@ -182,9 +182,10 @@ Running range-auto-date-histo-with-metrics [
 
 ### Considerations when Using the 1 TB Data Corpus
 
-*Caveat*: This corpus is being made available as a feature that is currently being alpha tested. Some points to note when carrying out performance runs using this corpus:
+*Caveat*: This corpus is being made available as a feature that is currently in beta testing. Note the following points when carrying out performance runs with this corpus:
 
 * Due to CloudFront download size limits, the uncompressed size of the 1 TB corpus is actually 0.95 TB (~0.9 TiB). This [issue has been noted](https://github.com/opensearch-project/opensearch-benchmark/issues/543) and will be resolved in due course.
+* Use an external data store to record metrics. With the in-memory store, the system will likely run out of memory and become unresponsive, yielding inaccurate performance numbers.
 * Use a load generation host with sufficient disk space to hold the corpus.
 * Ensure the target cluster has adequate storage and at least 3 data nodes.
 * Specify an appropriate shard count and number of replicas so that shards are evenly distributed and appropriately sized.
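
As context for the recommendation in this patch: OpenSearch Benchmark reads its metrics-store settings from its `benchmark.ini` configuration file. The sketch below is illustrative only, assuming a `[results_publishing]` section backed by an external OpenSearch cluster; the host, port, and credential values are placeholders, and the exact section and key names should be verified against the OpenSearch Benchmark documentation for your installed version.

```ini
; Hypothetical excerpt from benchmark.ini: route benchmark metrics to an
; external OpenSearch data store instead of the default in-memory store.
; All values below are placeholders, not working credentials.
[results_publishing]
datastore.type = opensearch
datastore.host = metrics-store.example.com
datastore.port = 443
datastore.secure = true
datastore.user = benchmark_user
datastore.password = benchmark_password
```

Pointing the data store at a cluster separate from both the load-generation host and the target cluster keeps metrics collection from competing for the memory the benchmark itself needs.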