[Feature Request] Make batch ingestion automatic, not a parameter on _bulk #14283
Comments
Makes a lot of sense, @andrross! I support this.
+1 on the feature request.
Semantically speaking, because it's not a max, I think
@dblock Agreed, I want to work through a use case to illustrate why I think this needs to change: Let's say I'm the OpenSearch admin for an organization that uses OpenSearch for log analytics. I have about a dozen different teams writing logs into the cluster I manage. I'm using an ingestion feature that can benefit from batching. The experience I want is that I can test out the new changes, run some performance tests, and then set a "batch_size" parameter in my system configuration that optimizes throughput and latency for my use case. Then I can deploy these changes and all my users see a performance improvement. I do not want to have to chase down a dozen teams, force them to deploy an update to the tool they use to send _bulk requests to the cluster, and get them to configure a good "batch_size". I do not want to repeat this exercise if I change the host type or anything else that results in a different optimal setting for "batch_size".
@andrross A different default
Regarding a default batch size other than 1: different ML servers have different maximum batch size limitations. For example, Cohere requires the batch size to be less than 96 and OpenAI requires it to be less than 2048. Exceeding the limit could lead to data loss. I'm afraid a default batch size other than 1 can't suit all users' situations. Additionally, we already added this parameter. Based on your suggestion, I think we could do as such:
This method won't give users a performance gain automatically either, due to the limitation mentioned in the first paragraph. For users who haven't started to use
Thanks @chishui. Your suggestions above are exactly what I've had in mind here. I realize some of what I suggested above may be unclear. I never intended to suggest that the effective behavior of the sub-batching in the neural-search processors should change. The key point is that the sub-batching of documents into batches smaller than what was provided in the original _bulk request should happen inside the processors and not at the IngestService layer.
If this sub-batching behavior is a common pattern, we could certainly create an
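To make the idea concrete, here is a minimal, self-contained sketch of sub-batching living inside a processor-style class rather than in the ingest layer. Everything below is hypothetical: BatchingTextProcessor, subBatchExecute, the Doc type parameter, and the Consumer-based handler are illustrative names, not OpenSearch or neural-search APIs, and the 96-document limit is just the Cohere figure quoted above used as an example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Hypothetical sketch: the processor owns its own sub-batching, so the caller
 * can hand it an arbitrarily large batch (e.g. an entire _bulk payload) and it
 * never sends more documents downstream than the ML service allows.
 */
abstract class BatchingTextProcessor<Doc> {

    private final int maxBatchSize;

    protected BatchingTextProcessor(int maxBatchSize) {
        // e.g. 96 for a Cohere-backed processor, per the limit quoted above
        this.maxBatchSize = maxBatchSize;
    }

    /** Accepts any number of documents and splits them into bounded sub-batches. */
    public final void batchExecute(List<Doc> docs, Consumer<List<Doc>> handler) {
        for (int from = 0; from < docs.size(); from += maxBatchSize) {
            int to = Math.min(from + maxBatchSize, docs.size());
            subBatchExecute(new ArrayList<>(docs.subList(from, to)), handler);
        }
    }

    /** Subclasses do the actual work on a sub-batch that respects maxBatchSize. */
    protected abstract void subBatchExecute(List<Doc> subBatch, Consumer<List<Doc>> handler);
}
```

The point, matching the comments above, is that the size of the caller's _bulk request and the provider-facing sub-batch size become independent concerns, with the latter owned entirely by the processor.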
Is your feature request related to a problem? Please describe
A new batch method was added to the o.o.ingest.Processor interface in #12457 that allows an ingest processor to operate on multiple documents simultaneously, instead of one by one. For certain processors, this can allow for much faster and more efficient processing. However, a new batch_size parameter was added to the _bulk API with a default value of 1. This means that in order to benefit from batch processing in any of my ingest processors, I have to do at a minimum two things: determine how many documents to include in my _bulk request and determine the optimal value to set for this batch_size parameter. I must also change all my ingestion tooling to support this new batch_size parameter and to specify it.

Describe the solution you'd like
I want the developers of my ingestion processors to determine good defaults for how they want to handle batches of documents, and then I can see increased performance with no change to my ingestion tooling by simply updating to the latest software. I acknowledge I may still have to experiment with finding the optimal number of documents to include in each _bulk request (this is the status quo for this API and not specific to ingest processors). Also, certain ingest processors may define expert-level configuration options to further optimize if necessary, but I expect the defaults to work well most of the time and to almost always be better than the performance I saw before batching was implemented.
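As a rough sketch of what "good defaults plus an optional expert-level setting" could look like at the processor level (the factory shape, the max_batch_size setting name, and the default value are all assumptions for illustration, not an existing OpenSearch API):

```java
import java.util.Map;

/**
 * Hypothetical factory sketch: the processor author ships a sensible default
 * batch limit, and an optional expert-level setting lets an admin override it
 * without any change to the tools that send _bulk requests.
 */
final class ExampleProcessorFactory {

    // Default chosen by the processor author; the value here is illustrative only.
    private static final int DEFAULT_MAX_BATCH_SIZE = 10;

    ExampleProcessorConfig create(Map<String, Object> config) {
        Object raw = config.get("max_batch_size"); // hypothetical setting name
        int maxBatchSize = (raw == null) ? DEFAULT_MAX_BATCH_SIZE : Integer.parseInt(raw.toString());
        if (maxBatchSize <= 0) {
            throw new IllegalArgumentException("max_batch_size must be a positive integer");
        }
        return new ExampleProcessorConfig(maxBatchSize);
    }

    /** Minimal holder for the parsed configuration. */
    record ExampleProcessorConfig(int maxBatchSize) {}
}
```

The design point is that the setting never has to be touched for the processor to perform well; it exists only for tuning edge cases.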
Related component
Indexing
Additional context
I believe this can be implemented as follows:

1. Change the default value of batch_size from 1 to Integer.MAX_VALUE. This means that by default the entire content of my bulk request will be passed to each ingest processor. However, the default implementation of batchExecute just operates on one document at a time, so unless my ingest processor is updated to leverage the new batchExecute method, I will see exactly the same behavior that existed previously.
2. Deprecate the batch_size parameter when it is specified in the _bulk API.
3. Remove the batch_size parameter in the _bulk API on main.

Additional discussion exists on the original RFC starting here: #12457 (comment)
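To illustrate why raising the default to Integer.MAX_VALUE would be behavior-preserving for processors that don't batch, here is a simplified sketch of the "default falls back to one-at-a-time" pattern. This is an illustration in the spirit of the batchExecute default from #12457, not the actual o.o.ingest.Processor signatures.

```java
import java.util.List;
import java.util.function.BiConsumer;

/**
 * Simplified sketch: even if the ingest layer hands a processor its entire
 * _bulk payload at once, a processor that only implements execute() still
 * processes documents exactly as it did before batching existed.
 */
interface SketchProcessor<Doc> {

    /** Single-document processing, as all existing processors already implement. */
    void execute(Doc doc, BiConsumer<Doc, Exception> handler);

    /** Batch entry point; the default simply iterates one document at a time. */
    default void batchExecute(List<Doc> docs, BiConsumer<Doc, Exception> handler) {
        for (Doc doc : docs) {
            execute(doc, handler);
        }
    }
}
```

Only a processor that genuinely benefits from batching needs to override batchExecute (or sub-batch internally as sketched earlier); everything else inherits the unchanged one-by-one behavior, which is why no per-request batch_size knob should be needed.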