Merge branch 'main' into refactoring/timestamp_range_unknown_updatev9

javanna authored Dec 19, 2024
2 parents d1399d7 + 6983f9a commit c039023
Showing 103 changed files with 2,506 additions and 523 deletions.
5 changes: 5 additions & 0 deletions docs/changelog/118353.yaml
@@ -0,0 +1,5 @@
pr: 118353
summary: Epoch Millis Rounding Down and Not Up 2
area: Infra/Core
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/118603.yaml
@@ -0,0 +1,6 @@
pr: 118603
summary: Allow DATE_PARSE to read the timezones
area: ES|QL
type: bug
issues:
- 117680
5 changes: 5 additions & 0 deletions docs/changelog/118941.yaml
@@ -0,0 +1,5 @@
pr: 118941
summary: Allow archive and searchable snapshots indices in N-2 version
area: Recovery
type: enhancement
issues: []
11 changes: 10 additions & 1 deletion docs/reference/esql/esql-limitations.asciidoc
@@ -112,7 +112,7 @@ it is necessary to use the search function, like <<esql-match>>, in a <<esql-whe
directly after the <<esql-from>> source command, or close enough to it.
Otherwise, the query will fail with a validation error.
Another limitation is that any <<esql-where>> command containing a full-text search function
cannot also use disjunctions (`OR`).
cannot also use disjunctions (`OR`) unless all functions used in the OR clauses are full-text functions themselves.

For example, this query is valid:

@@ -139,6 +139,15 @@ FROM books
| WHERE MATCH(author, "Faulkner") OR author LIKE "Hemingway"
----

However, this query will succeed because it uses full-text functions in both `OR` clauses:

[source,esql]
----
FROM books
| WHERE MATCH(author, "Faulkner") OR QSTR("author: Hemingway")
----


Note that, because of <<esql-limitations-text-fields,the way {esql} treats `text` values>>,
any queries on `text` fields that do not explicitly use the full-text functions,
<<esql-match>> or <<esql-qstr>>, will behave as if the fields are actually `keyword` fields:
2 changes: 1 addition & 1 deletion docs/reference/esql/functions/description/match.asciidoc


2 changes: 1 addition & 1 deletion docs/reference/esql/functions/kibana/definition/match.json


9 changes: 8 additions & 1 deletion docs/reference/esql/functions/kibana/docs/match.md


4 changes: 3 additions & 1 deletion docs/reference/esql/functions/search-functions.asciidoc
@@ -6,11 +6,13 @@
++++

Full text functions are used to search for text in fields.
<<analysis, Text analysiss>> is used to analyze the query before it is searched.
<<analysis, Text analysis>> is used to analyze the query before it is searched.

Full text functions can be used to match <<esql-multivalued-fields,multivalued fields>>.
A multivalued field that contains a value that matches a full text query is considered to match the query.

Full text functions are significantly more performant for text search use cases on large data sets than using pattern matching or regular expressions with `LIKE` or `RLIKE`.

See <<esql-limitations-full-text-search,full text search limitations>> for information on the limitations of full text search.

{esql} supports these full-text search functions:
22 changes: 21 additions & 1 deletion docs/reference/esql/processing-commands/where.asciidoc
@@ -7,7 +7,7 @@ the input table for which the provided condition evaluates to `true`.

[TIP]
====
In case of value exclusions, fields with `null` values will be excluded from search results.
In case of value exclusions, fields with `null` values will be excluded from search results.
In this context a `null` means either there is an explicit `null` value in the document or there is no value at all.
For example: `WHERE field != "value"` will be interpreted as `WHERE field != "value" AND field IS NOT NULL`.
====
@@ -58,6 +58,26 @@ For a complete list of all functions, refer to <<esql-functions>>.

include::../functions/predicates.asciidoc[tag=body]

For matching text, you can use <<esql-search-functions,full text search functions>> like `MATCH`.

Use <<esql-match,`MATCH`>> to perform a <<query-dsl-match-query,match query>> on a specified field.

`MATCH` can be used on `text` fields, as well as on other field types like boolean, date, and numeric types.

[source.merge.styled,esql]
----
include::{esql-specs}/match-function.csv-spec[tag=match-with-field]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/match-function.csv-spec[tag=match-with-field-result]
|===

[TIP]
====
You can also use the shorthand <<esql-search-operators,match operator>> `:` instead of `MATCH`.
====

include::../functions/like.asciidoc[tag=body]

include::../functions/rlike.asciidoc[tag=body]
@@ -34,6 +34,7 @@
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
@@ -72,17 +73,17 @@ public static void initialize(Instrumentation inst) throws Exception {

Instrumenter instrumenter = INSTRUMENTER_FACTORY.newInstrumenter(EntitlementChecker.class, checkMethods);
inst.addTransformer(new Transformer(instrumenter, classesToTransform), true);
// TODO: should we limit this array somehow?
var classesToRetransform = classesToTransform.stream().map(EntitlementInitialization::internalNameToClass).toArray(Class[]::new);
inst.retransformClasses(classesToRetransform);
inst.retransformClasses(findClassesToRetransform(inst.getAllLoadedClasses(), classesToTransform));
}

private static Class<?> internalNameToClass(String internalName) {
try {
return Class.forName(internalName.replace('/', '.'), false, ClassLoader.getPlatformClassLoader());
} catch (ClassNotFoundException e) {
throw new RuntimeException(e);
private static Class<?>[] findClassesToRetransform(Class<?>[] loadedClasses, Set<String> classesToTransform) {
List<Class<?>> retransform = new ArrayList<>();
for (Class<?> loadedClass : loadedClasses) {
if (classesToTransform.contains(loadedClass.getName().replace(".", "/"))) {
retransform.add(loadedClass);
}
}
return retransform.toArray(new Class<?>[0]);
}

private static PolicyManager createPolicyManager() throws IOException {
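The refactor above stops resolving every transform target eagerly with `Class.forName` and instead filters the classes the JVM has already loaded. The matching hinges on converting between binary names (`java.lang.String`, as returned by `Class.getName()`) and JVM internal names (`java/lang/String`, as used in the transform set). Below is a minimal standalone sketch of just that filtering step; the `Instrumentation` plumbing is omitted, and the sample classes and target set are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class RetransformFilter {
    // Mirrors the shape of findClassesToRetransform: keep only those loaded
    // classes whose binary name, converted to internal form, appears in the
    // set of classes selected for transformation.
    static List<Class<?>> filter(Class<?>[] loadedClasses, Set<String> internalNames) {
        List<Class<?>> retransform = new ArrayList<>();
        for (Class<?> loaded : loadedClasses) {
            // getName() yields "java.lang.String"; the transform set uses "java/lang/String"
            if (internalNames.contains(loaded.getName().replace('.', '/'))) {
                retransform.add(loaded);
            }
        }
        return retransform;
    }

    public static void main(String[] args) {
        Class<?>[] loaded = { String.class, Integer.class, Thread.class };
        Set<String> targets = Set.of("java/lang/String", "java/lang/Thread");
        List<Class<?>> result = filter(loaded, targets);
        System.out.println(result.size());           // 2
        System.out.println(result.get(0).getName()); // java.lang.String
    }
}
```

Filtering against `Instrumentation.getAllLoadedClasses()` also sidesteps the removed `ClassNotFoundException` path: a class that was never loaded simply cannot appear in the result, so there is nothing to retransform and nothing to throw.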
@@ -12,6 +12,9 @@
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.cluster.metadata.DataStreamFailureStoreSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.RestStatus;
import org.junit.Before;

import java.io.IOException;
@@ -122,13 +125,25 @@ public void testExplicitlyResetDataStreamOptions() throws IOException {
assertOK(client().performRequest(otherRequest));
}

public void testEnableDisableFailureStore() throws IOException {
public void testBehaviorWithEachFailureStoreOptionAndClusterSetting() throws IOException {
{
// Default data stream options
assertAcknowledged(client().performRequest(new Request("DELETE", "/_data_stream/" + DATA_STREAM_NAME + "/_options")));
assertFailureStore(false, 1);
setDataStreamFailureStoreClusterSetting(DATA_STREAM_NAME);
assertDataStreamOptions(null);
assertFailureStoreValuesInGetDataStreamResponse(true, 1);
assertRedirectsDocWithBadMappingToFailureStore();
setDataStreamFailureStoreClusterSetting("does-not-match-failure-data-stream");
assertDataStreamOptions(null);
assertFailureStoreValuesInGetDataStreamResponse(false, 1);
assertFailsDocWithBadMapping();
setDataStreamFailureStoreClusterSetting(null); // should get same behaviour as when we set it to something non-matching
assertDataStreamOptions(null);
assertFailureStoreValuesInGetDataStreamResponse(false, 1);
assertFailsDocWithBadMapping();
}
{
// Data stream options with failure store enabled
Request enableRequest = new Request("PUT", "/_data_stream/" + DATA_STREAM_NAME + "/_options");
enableRequest.setJsonEntity("""
{
@@ -137,11 +152,21 @@ public void testEnableDisableFailureStore() throws IOException {
}
}""");
assertAcknowledged(client().performRequest(enableRequest));
assertFailureStore(true, 1);
setDataStreamFailureStoreClusterSetting(DATA_STREAM_NAME);
assertDataStreamOptions(true);
assertFailureStoreValuesInGetDataStreamResponse(true, 1);
assertRedirectsDocWithBadMappingToFailureStore();
setDataStreamFailureStoreClusterSetting("does-not-match-failure-data-stream"); // should have no effect as enabled in options
assertDataStreamOptions(true);
assertFailureStoreValuesInGetDataStreamResponse(true, 1);
assertRedirectsDocWithBadMappingToFailureStore();
setDataStreamFailureStoreClusterSetting(null); // same as previous
assertDataStreamOptions(true);
assertFailureStoreValuesInGetDataStreamResponse(true, 1);
assertRedirectsDocWithBadMappingToFailureStore();
}

{
// Data stream options with failure store disabled
Request disableRequest = new Request("PUT", "/_data_stream/" + DATA_STREAM_NAME + "/_options");
disableRequest.setJsonEntity("""
{
@@ -150,13 +175,23 @@ public void testEnableDisableFailureStore() throws IOException {
}
}""");
assertAcknowledged(client().performRequest(disableRequest));
assertFailureStore(false, 1);
setDataStreamFailureStoreClusterSetting(DATA_STREAM_NAME); // should have no effect as disabled in options
assertDataStreamOptions(false);
assertFailureStoreValuesInGetDataStreamResponse(false, 1);
assertFailsDocWithBadMapping();
setDataStreamFailureStoreClusterSetting("does-not-match-failure-data-stream");
assertDataStreamOptions(false);
assertFailureStoreValuesInGetDataStreamResponse(false, 1);
assertFailsDocWithBadMapping();
setDataStreamFailureStoreClusterSetting(null);
assertDataStreamOptions(false);
assertFailureStoreValuesInGetDataStreamResponse(false, 1);
assertFailsDocWithBadMapping();
}
}

@SuppressWarnings("unchecked")
private void assertFailureStore(boolean failureStoreEnabled, int failureStoreSize) throws IOException {
private void assertFailureStoreValuesInGetDataStreamResponse(boolean failureStoreEnabled, int failureStoreSize) throws IOException {
final Response dataStreamResponse = client().performRequest(new Request("GET", "/_data_stream/" + DATA_STREAM_NAME));
List<Object> dataStreams = (List<Object>) entityAsMap(dataStreamResponse).get("data_streams");
assertThat(dataStreams.size(), is(1));
@@ -198,4 +233,32 @@ private List<String> getIndices(Map<String, Object> response) {
List<Map<String, String>> indices = (List<Map<String, String>>) response.get("indices");
return indices.stream().map(index -> index.get("index_name")).toList();
}

private static void setDataStreamFailureStoreClusterSetting(String value) throws IOException {
updateClusterSettings(
Settings.builder().put(DataStreamFailureStoreSettings.DATA_STREAM_FAILURE_STORED_ENABLED_SETTING.getKey(), value).build()
);
}

private Response putDocumentWithBadMapping() throws IOException {
Request request = new Request("POST", DATA_STREAM_NAME + "/_doc");
request.setJsonEntity("""
{
"@timestamp": "not a timestamp",
"foo": "bar"
}
""");
return client().performRequest(request);
}

private void assertRedirectsDocWithBadMappingToFailureStore() throws IOException {
Response response = putDocumentWithBadMapping();
String failureStoreResponse = (String) entityAsMap(response).get("failure_store");
assertThat(failureStoreResponse, is("used"));
}

private void assertFailsDocWithBadMapping() {
ResponseException e = assertThrows(ResponseException.class, this::putDocumentWithBadMapping);
assertThat(e.getResponse().getStatusLine().getStatusCode(), is(RestStatus.BAD_REQUEST.getStatus()));
}
}
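The rewritten test exercises the same precedence rule in each of its three blocks: an explicit value in the data stream options always wins, and the cluster setting only applies when no option is set (an unset cluster setting behaves like a non-matching one). A minimal sketch of that decision logic follows; the method and parameter names here are hypothetical, standing in for the real check the production code performs (`DataStream.isFailureStoreEffectivelyEnabled`):

```java
public class FailureStorePrecedence {
    /**
     * Returns whether the failure store is effectively enabled for a data stream.
     *
     * @param explicitOption        the value set in the data stream options,
     *                              or null if no option is set
     * @param clusterSettingMatches whether the cluster-level setting's pattern
     *                              matches this data stream's name
     */
    static boolean effectivelyEnabled(Boolean explicitOption, boolean clusterSettingMatches) {
        if (explicitOption != null) {
            return explicitOption;        // explicit option always wins
        }
        return clusterSettingMatches;     // otherwise fall back to the cluster setting
    }

    public static void main(String[] args) {
        // Default options, matching cluster setting: redirected to failure store
        System.out.println(effectivelyEnabled(null, true));          // true
        // Explicitly disabled: cluster setting has no effect
        System.out.println(effectivelyEnabled(Boolean.FALSE, true)); // false
        // Default options, unset/non-matching setting: document indexing fails
        System.out.println(effectivelyEnabled(null, false));         // false
    }
}
```

Each branch of the truth table corresponds to one `assertRedirectsDocWithBadMappingToFailureStore` or `assertFailsDocWithBadMapping` call in the test above.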
@@ -25,6 +25,7 @@
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.health.ClusterStateHealth;
import org.elasticsearch.cluster.metadata.DataStream;
import org.elasticsearch.cluster.metadata.DataStreamFailureStoreSettings;
import org.elasticsearch.cluster.metadata.DataStreamGlobalRetentionSettings;
import org.elasticsearch.cluster.metadata.DataStreamLifecycle;
import org.elasticsearch.cluster.metadata.IndexMetadata;
@@ -64,6 +65,7 @@ public class TransportGetDataStreamsAction extends TransportMasterNodeReadAction
private final SystemIndices systemIndices;
private final ClusterSettings clusterSettings;
private final DataStreamGlobalRetentionSettings globalRetentionSettings;
private final DataStreamFailureStoreSettings dataStreamFailureStoreSettings;
private final Client client;

@Inject
@@ -75,6 +77,7 @@ public TransportGetDataStreamsAction(
IndexNameExpressionResolver indexNameExpressionResolver,
SystemIndices systemIndices,
DataStreamGlobalRetentionSettings globalRetentionSettings,
DataStreamFailureStoreSettings dataStreamFailureStoreSettings,
Client client
) {
super(
@@ -91,6 +94,7 @@ public TransportGetDataStreamsAction(
this.systemIndices = systemIndices;
this.globalRetentionSettings = globalRetentionSettings;
clusterSettings = clusterService.getClusterSettings();
this.dataStreamFailureStoreSettings = dataStreamFailureStoreSettings;
this.client = new OriginSettingClient(client, "stack");
}

@@ -122,6 +126,7 @@ public void onResponse(DataStreamsStatsAction.Response response) {
systemIndices,
clusterSettings,
globalRetentionSettings,
dataStreamFailureStoreSettings,
maxTimestamps
)
);
@@ -134,7 +139,16 @@ public void onFailure(Exception e) {
});
} else {
listener.onResponse(
innerOperation(state, request, indexNameExpressionResolver, systemIndices, clusterSettings, globalRetentionSettings, null)
innerOperation(
state,
request,
indexNameExpressionResolver,
systemIndices,
clusterSettings,
globalRetentionSettings,
dataStreamFailureStoreSettings,
null
)
);
}
}
@@ -146,11 +160,16 @@ static GetDataStreamAction.Response innerOperation(
SystemIndices systemIndices,
ClusterSettings clusterSettings,
DataStreamGlobalRetentionSettings globalRetentionSettings,
DataStreamFailureStoreSettings dataStreamFailureStoreSettings,
@Nullable Map<String, Long> maxTimestamps
) {
List<DataStream> dataStreams = getDataStreams(state, indexNameExpressionResolver, request);
List<GetDataStreamAction.Response.DataStreamInfo> dataStreamInfos = new ArrayList<>(dataStreams.size());
for (DataStream dataStream : dataStreams) {
// For this action, we are returning whether the failure store is effectively enabled, either in metadata or by cluster setting.
// Users can use the get data stream options API to find out whether it is explicitly enabled in metadata.
boolean failureStoreEffectivelyEnabled = DataStream.isFailureStoreFeatureFlagEnabled()
&& dataStream.isFailureStoreEffectivelyEnabled(dataStreamFailureStoreSettings);
final String indexTemplate;
boolean indexTemplatePreferIlmValue = true;
String ilmPolicyName = null;
@@ -254,6 +273,7 @@ public int compareTo(IndexInfo o) {
dataStreamInfos.add(
new GetDataStreamAction.Response.DataStreamInfo(
dataStream,
failureStoreEffectivelyEnabled,
streamHealth.getStatus(),
indexTemplate,
ilmPolicyName,