SOLR-17022: Support for glob patterns for fields in Export handler, Stream handler and with SelectStream streaming expression #1996

Merged (14 commits) on Dec 22, 2023
solr/core/src/java/org/apache/solr/handler/export/ExportWriter.java (67 additions, 10 deletions)
@@ -27,9 +27,13 @@
import java.io.PrintWriter;
import java.lang.invoke.MethodHandles;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
@@ -53,6 +57,7 @@
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.params.StreamParams;
import org.apache.solr.common.util.GlobPatternUtil;
import org.apache.solr.common.util.JavaBinCodec;
import org.apache.solr.core.SolrCore;
import org.apache.solr.metrics.SolrMetricsContext;
@@ -487,19 +492,14 @@ void writeDoc(

public FieldWriter[] getFieldWriters(String[] fields, SolrIndexSearcher searcher)
throws IOException {
IndexSchema schema = searcher.getSchema();
FieldWriter[] writers = new FieldWriter[fields.length];
DocValuesIteratorCache dvIterCache = new DocValuesIteratorCache(searcher, false);
for (int i = 0; i < fields.length; i++) {
String field = fields[i];
SchemaField schemaField = null;

try {
schemaField = schema.getField(field);
} catch (Exception e) {
throw new IOException(e);
}
List<SchemaField> expandedFields = expandFieldList(fields, searcher);

FieldWriter[] writers = new FieldWriter[expandedFields.size()];
for (int i = 0; i < expandedFields.size(); i++) {
SchemaField schemaField = expandedFields.get(i);
String field = schemaField.getName();
if (!schemaField.hasDocValues()) {
throw new IOException(schemaField + " must have DocValues to use this feature.");
}
@@ -844,4 +844,61 @@ public String getMessage() {
return "Early Client Disconnect";
}
}

/**
* Creates a complete field list from the provided field list, expanding any glob patterns into
* concrete field names
*
* @param fields the original set of fields provided
* @param searcher an index searcher to access schema info
* @return a complete list of fields, including any fields that match glob patterns
* @throws IOException if a provided field does not exist or cannot be retrieved from the schema
* info
*/
private List<SchemaField> expandFieldList(String[] fields, SolrIndexSearcher searcher)
throws IOException {
List<SchemaField> expandedFields = new ArrayList<>(fields.length);
Set<String> fieldsProcessed = new HashSet<>();
for (String field : fields) {
try {
if (field.contains("*")) {
getGlobFields(field, searcher, fieldsProcessed, expandedFields);
} else {
if (fieldsProcessed.add(field)) {
expandedFields.add(searcher.getSchema().getField(field));
}
}
} catch (Exception e) {
throw new IOException(e);
}
}

return expandedFields;
}

/**
* Create a list of schema fields that match a given glob pattern
*
* @param fieldPattern the glob pattern to match
* @param searcher an index searcher used to access schema info
* @param fieldsProcessed the set of field names already processed, used to avoid duplicates
* @param expandedFields the list of fields to add expanded field names into
*/
private void getGlobFields(
String fieldPattern,
SolrIndexSearcher searcher,
Set<String> fieldsProcessed,
List<SchemaField> expandedFields) {
for (FieldInfo fi : searcher.getFieldInfos()) {

Contributor:
There can be many fieldInfo, and you're looping over a "matches" call that is going to internally build a regex each time. Maybe you should first do the hasDocValues etc. checks so we do this matches check last?

if (GlobPatternUtil.matches(fieldPattern, fi.getName())) {
SchemaField schemaField = searcher.getSchema().getField(fi.getName());
if (fieldsProcessed.add(fi.getName())

Contributor:
This line might add to fieldsProcessed, yet exclude the field because it doesn't "hasDocValues". This looks suspicious to me.

&& schemaField.hasDocValues()
&& (!(schemaField.getType() instanceof SortableTextField)

Contributor Author:
Wanted to highlight this because @magibney and I were discussing this previously. The way this is implemented for the Export handler here actually deviates from how glob patterns work for the Select handler.

The Select handler will not match any fields where useDocValuesAsStored=false for glob patterns, those fields must be explicitly provided in the field list.

For the purposes of Export I did not follow that pattern and instead will match any fields that have docvalues enabled with the exception of SortableTextField which require useDocValuesAsStored=true to be used in Export in other places in the code.

My reasoning here is that there is a performance hit for getting non-stored, docvalues enabled fields in the Select handler that doesn't seem to be as impactful in the Export handler.

Open to other opinions on this topic, should we:

  1. Have glob patterns for fields in Export handler return any fields that are able to be exported
  2. Match the behavior in the Select handler and only return fields that match the pattern, have docvalues enabled AND have useDocValuesAsStored=true

Contributor:
I vote for consistency if possible, since I think of both /select and /export as the same, just one is faster than the other ;-). If that isn't possible, then maybe if the ref guide makes it really clear "This is why you use /select and its ramifications" and "This is why you use /export and its ramifications", then that might cover these differences. I can see the bug reports if we aren't clear ;-).

Contributor:
As @justinrsweeney foreshadowed, I am in favor of parity in how this is handled between select and export.

The purpose of udvas as I understand it is not simply to mitigate performance issues in select (and in fact, performance should be comparable between select and export as of #1938), but also as a config option to allow the configuration of docValues for a field without requiring that those values be returned when a field matches a glob pattern.

There are plenty of use cases where, e.g., a derivative string field is configured solely for the purpose of sort, faceting, [or other purpose that would make even less sense to return on export]. It should be possible to configure a field that's intended for one of the many other purposes docValues serve, without forcing it to be returned when the field name matches a glob pattern. Note also, if both stored=true and useDocValuesAsStored=true, stored fields will be used for select and docValues will be used for export.

I think what I'm missing is: how does this help usability? Are there cases where one would want docValues to be used for field value retrieval, but could not simply set that field to useDocValuesAsStored=true? And udvas defaults to true anyway (for all fields except SortableTextField) iirc.

I'm curious to hear others' thoughts on this as well!

Contributor Author:
I updated this PR to mimic the same behavior as the select handler, where useDocValuesAsStored=false fields will not be returned unless specifically requested. I'm in agreement that consistency here is better.

|| schemaField.useDocValuesAsStored())) {
expandedFields.add(schemaField);
}
}
}
}
}
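
A minimal sketch of the re-ordering suggested in the review comments above (illustrative only, not the code merged in this PR). It compiles the PathMatcher once per pattern instead of once per FieldInfo, runs the free index-level docValues check before the regex-backed glob match, defers the schema lookup until after the match, only marks a field as processed once it is actually accepted, and requires useDocValuesAsStored=true for glob-matched fields, in line with the behavior the author later adopted. It assumes additional imports of java.nio.file.FileSystems, java.nio.file.PathMatcher, java.nio.file.Paths and org.apache.lucene.index.DocValuesType.

  private void getGlobFields(
      String fieldPattern,
      SolrIndexSearcher searcher,
      Set<String> fieldsProcessed,
      List<SchemaField> expandedFields) {
    // Compile the glob once per pattern rather than rebuilding the underlying regex per field.
    PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + fieldPattern);
    for (FieldInfo fi : searcher.getFieldInfos()) {
      // Cheap index-level check first: skip fields that have no docValues at all.
      if (fi.getDocValuesType() == DocValuesType.NONE) {
        continue;
      }
      String name = fi.getName();
      // Run the comparatively expensive glob match only for fields that can still qualify.
      if (!matcher.matches(Paths.get(name))) {
        continue;
      }
      SchemaField schemaField = searcher.getSchema().getField(name);
      // Record the field as processed only once it is actually accepted for export.
      if (schemaField.hasDocValues()
          && schemaField.useDocValuesAsStored()
          && fieldsProcessed.add(name)) {
        expandedFields.add(schemaField);
      }
    }
  }

How much this buys depends on how many fields the index has and how many survive the docValues check, so it is a micro-optimization to measure rather than a drop-in replacement.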
@@ -28,7 +28,6 @@
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;
import org.apache.commons.io.FilenameUtils;
import org.apache.lucene.queries.function.FunctionQuery;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.QueryValueSource;
@@ -37,6 +36,7 @@
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.GlobPatternUtil;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.transform.DocTransformer;
import org.apache.solr.response.transform.DocTransformers;
@@ -578,7 +578,7 @@ public boolean wantsField(String name) {
}
for (String s : globs) {
// TODO something better?
if (FilenameUtils.wildcardMatch(name, s)) {
if (GlobPatternUtil.matches(s, name)) {
okFieldNames.add(name); // Don't calculate it again
return true;
}
@@ -1298,6 +1298,43 @@ public void testExpr() throws Exception {
.contains("Must have useDocValuesAsStored='true'"));
}

@Test
public void testGlobFields() throws Exception {
assertU(delQ("*:*"));
assertU(commit());
createLargeIndex();
SolrQueryRequest req =
req("q", "*:*", "qt", "/export", "fl", "id,*_udvas,*_i_p", "sort", "id asc");
assertJQ(
req,
"response/numFound==100000",
"response/docs/[0]/id=='0'",
"response/docs/[1]/id=='1'",
"response/docs/[0]/sortabledv_udvas=='0'",
"response/docs/[1]/sortabledv_udvas=='1'",
"response/docs/[0]/small_i_p==0",
"response/docs/[1]/small_i_p==1");

assertU(delQ("*:*"));
assertU(commit());
createLargeIndex();
req = req("q", "*:*", "qt", "/export", "fl", "*", "sort", "id asc");
assertJQ(
req,
"response/numFound==100000",
"response/docs/[0]/id=='0'",
"response/docs/[1]/id=='1'",
"response/docs/[0]/sortabledv_udvas=='0'",
"response/docs/[1]/sortabledv_udvas=='1'",
"response/docs/[0]/small_i_p==0",
"response/docs/[1]/small_i_p==1");

String jq = JQ(req);
assertFalse(
"Fields without docvalues and useDocValuesAsStored should not be returned",
jq.contains("\"sortabledv\""));
}

@SuppressWarnings("rawtypes")
private void validateSort(int numDocs) throws Exception {
// 10 fields
@@ -70,7 +70,10 @@ It can get worse otherwise.
The `fl` property defines the fields that will be exported with the result set.
Any of the field types that can be sorted (i.e., int, long, float, double, string, date, boolean) can be used in the field list.
The fields can be single or multi-valued.
However, returning scores and wildcards are not supported at this time.

Wildcard patterns can be used in the field list (e.g. `fl=*_i`) and will be expanded to the list of fields that match the pattern and can be exported; see <<Field Requirements>>.
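
For example, assuming dynamic integer fields that end in `_i`, a request such as `http://localhost:8983/solr/core_name/export?q=*:*&sort=id+asc&fl=id,*_i` (collection and field names are illustrative) would export `id` together with every `*_i` field that meets those requirements.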

Returning scores is not supported at this time.

=== Specifying the Local Streaming Expression

@@ -1375,7 +1375,7 @@
=== select Parameters

* `StreamExpression`
* `fieldName`: name of field to include in the output tuple (can include multiple of these), such as `outputTuple[fieldName] = inputTuple[fieldName]`
* `fieldName`: name of field to include in the output tuple (can include multiple of these), such as `outputTuple[fieldName] = inputTuple[fieldName]`. The `fieldName` can be a wildcard pattern, e.g. `a_*` to select all fields that start with `a_`.
* `fieldName as aliasFieldName`: aliased field name to include in the output tuple (can include multiple of these), such as `outputTuple[aliasFieldName] = incomingTuple[fieldName]`
* `replace(fieldName, value, withValue=replacementValue)`: if `incomingTuple[fieldName] == value` then `outgoingTuple[fieldName]` will be set to `replacementValue`.
`value` can be the string "null" to replace a null value with some other value.
@@ -38,6 +38,7 @@
import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionParser;
import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionValue;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
import org.apache.solr.common.util.GlobPatternUtil;

/**
* Selects fields from the incoming stream and applies optional field renaming. Does not reorder the
@@ -52,14 +53,21 @@ public class SelectStream extends TupleStream implements Expressible {
private TupleStream stream;
private StreamContext streamContext;
private Map<String, String> selectedFields;
private List<String> selectedFieldGlobPatterns;
private Map<StreamEvaluator, String> selectedEvaluators;
private List<StreamOperation> operations;

public SelectStream(TupleStream stream, List<String> selectedFields) throws IOException {
this.stream = stream;
this.selectedFields = new HashMap<>();
this.selectedFieldGlobPatterns = new ArrayList<>();
for (String selectedField : selectedFields) {
this.selectedFields.put(selectedField, selectedField);
if (selectedField.contains("*")) {
// selected field is a glob pattern
this.selectedFieldGlobPatterns.add(selectedField);
} else {
this.selectedFields.put(selectedField, selectedField);
}
}
operations = new ArrayList<>();
selectedEvaluators = new LinkedHashMap<>();
@@ -68,6 +76,7 @@
public SelectStream(TupleStream stream, Map<String, String> selectedFields) throws IOException {
this.stream = stream;
this.selectedFields = selectedFields;
selectedFieldGlobPatterns = new ArrayList<>();
operations = new ArrayList<>();
selectedEvaluators = new LinkedHashMap<>();
}
@@ -123,6 +132,7 @@
stream = factory.constructStream(streamExpressions.get(0));

selectedFields = new HashMap<>();
selectedFieldGlobPatterns = new ArrayList<>();
selectedEvaluators = new LinkedHashMap<>();
for (StreamExpressionParameter parameter : selectAsFieldsExpressions) {
StreamExpressionValue selectField = (StreamExpressionValue) parameter;
@@ -175,7 +185,11 @@
selectedFields.put(asValue, asName);
}
} else {
selectedFields.put(value, value);
if (value.contains("*")) {
selectedFieldGlobPatterns.add(value);
} else {
selectedFields.put(value, value);
}
}
}

@@ -217,6 +231,11 @@
}
}

// selected glob patterns
for (String selectFieldGlobPattern : selectedFieldGlobPatterns) {
expression.addParameter(selectFieldGlobPattern);
}

// selected evaluators
for (Map.Entry<StreamEvaluator, String> selectedEvaluator : selectedEvaluators.entrySet()) {
expression.addParameter(
@@ -308,6 +327,13 @@ public Tuple read() throws IOException {
workingForEvaluators.put(fieldName, original.get(fieldName));
if (selectedFields.containsKey(fieldName)) {
workingToReturn.put(selectedFields.get(fieldName), original.get(fieldName));
} else {
for (String globPattern : selectedFieldGlobPatterns) {
if (GlobPatternUtil.matches(globPattern, fieldName)) {

Contributor:
Shouldn't SelectStream also use SolrReturnFields and not use lower level GlobPattern stuff (it's something SRF can handle)?

Disclaimer: I haven't looked at this PR in a long time.

Contributor Author:
solrj-streaming does not currently have a dependency on core, in fact currently I think core depends on solrj-streaming. I didn't want to refactor SolrReturnFields to live elsewhere given the scope of this PR so not using that.

Contributor:
Oh right; of course.

workingToReturn.put(fieldName, original.get(fieldName));
break;
}
}
}
}

@@ -105,14 +105,15 @@ public void testSelectStream() throws Exception {
try (SelectStream stream =
new SelectStream(
StreamExpressionParser.parse(
"select(\"a_s as fieldA\", search(collection1, q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f asc, a_i asc\"))"),
"select(\"a_s as fieldA\", a_*, search(collection1, q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f asc, a_i asc\"))"),
factory)) {
expressionString = stream.toExpression(factory).toString();
assertTrue(expressionString.contains("select(search(collection1,"));
assertTrue(expressionString.contains("q=\"*:*\""));
assertTrue(expressionString.contains("fl=\"id,a_s,a_i,a_f\""));
assertTrue(expressionString.contains("sort=\"a_f asc, a_i asc\""));
assertTrue(expressionString.contains("a_s as fieldA"));
assertTrue(expressionString.contains("a_*"));
}
}

@@ -0,0 +1,27 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.common.util;

import java.nio.file.FileSystems;
import java.nio.file.Paths;

/**
 * Utility for matching glob patterns (e.g. {@code *_i}) against plain strings such as field names.
 * It delegates to the JDK's {@link java.nio.file.PathMatcher} "glob:" syntax rather than
 * hand-rolling a glob-to-regex conversion; the input string is wrapped in a {@code Path} only so
 * it can be handed to the matcher.
 */
public class GlobPatternUtil {

public static boolean matches(String pattern, String input) {
return FileSystems.getDefault().getPathMatcher("glob:" + pattern).matches(Paths.get(input));

dsmiley (Contributor), Oct 13, 2023:
This is bizarre just for some field wildcard support. If there is a reason we use FileSystems then a comment is necessary.

Contributor:
I took a look at how it's implemented. If only we could call ZipUtils.toRegexPattern but the class is package protected. It's a shame to recompile the glob on each call to matches!

}
}
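
A possible follow-up to the comment above, sketched purely for illustration (it is not part of this PR): cache the compiled PathMatcher per pattern so repeated calls with the same glob do not recompile the underlying regex. The class name is hypothetical, and the unbounded static map is only reasonable if the set of patterns stays small and is not attacker-controlled.

package org.apache.solr.common.util;

import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;
import java.util.concurrent.ConcurrentHashMap;

public class CachingGlobPatternUtil {

  // One compiled matcher per glob pattern; ConcurrentHashMap keeps lookups thread-safe.
  private static final ConcurrentHashMap<String, PathMatcher> MATCHERS = new ConcurrentHashMap<>();

  public static boolean matches(String pattern, String input) {
    PathMatcher matcher =
        MATCHERS.computeIfAbsent(
            pattern, p -> FileSystems.getDefault().getPathMatcher("glob:" + p));
    return matcher.matches(Paths.get(input));
  }
}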
@@ -0,0 +1,33 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.solr.common.util;

import org.apache.solr.SolrTestCase;

public class TestGlobPatternUtil extends SolrTestCase {

public void testMatches() {
assertTrue(GlobPatternUtil.matches("*_str", "user_str"));
assertFalse(GlobPatternUtil.matches("*_str", "str_user"));
assertTrue(GlobPatternUtil.matches("str_*", "str_user"));
assertFalse(GlobPatternUtil.matches("str_*", "user_str"));
assertTrue(GlobPatternUtil.matches("str?", "str1"));
assertFalse(GlobPatternUtil.matches("str?", "str_user"));
assertTrue(GlobPatternUtil.matches("user_*_str", "user_type_str"));
assertFalse(GlobPatternUtil.matches("user_*_str", "user_str"));
}
}