Merge branch 'main' into enhancement/bounded_executor_concurrent_search
javanna committed Aug 7, 2023
2 parents 026258c + 6e9b649 commit fed2aaf
Showing 451 changed files with 11,856 additions and 3,067 deletions.
16 changes: 16 additions & 0 deletions .buildkite/pipelines/periodic.trigger.yml
@@ -0,0 +1,16 @@
steps:
- trigger: elasticsearch-periodic
label: Trigger periodic pipeline for main
async: true
build:
branch: main
- trigger: elasticsearch-periodic
label: Trigger periodic pipeline for 8.9
async: true
build:
branch: "8.9"
- trigger: elasticsearch-periodic
label: Trigger periodic pipeline for 7.17
async: true
build:
branch: "7.17"
16 changes: 7 additions & 9 deletions TRACING.md
@@ -1,16 +1,15 @@
# Tracing in Elasticsearch

Elasticsearch is instrumented using the [OpenTelemetry][otel] API, which allows
us to gather traces and analyze what Elasticsearch is doing.

ES developers to gather traces and analyze what Elasticsearch is doing.

## How is tracing implemented?

The Elasticsearch server code contains a [`tracing`][tracing] package, which is
The Elasticsearch server code contains a [tracing][tracing] package, which is
an abstraction over the OpenTelemetry API. All locations in the code that
perform instrumentation and tracing must use these abstractions.

Separately, there is the [`apm`](./modules/apm/) module, which works with the
Separately, there is the [apm](./modules/apm) module, which works with the
OpenTelemetry API directly to record trace data. Underneath the OTel API, we
use Elastic's [APM agent for Java][agent], which attaches at runtime to the
Elasticsearch JVM and removes the need for Elasticsearch to hard-code the use of
@@ -100,7 +99,7 @@ tasks are long-lived and are not suitable candidates for APM tracing.

When a span is started, Elasticsearch tracks information about that span in the
current [thread context][thread-context]. If a new thread context is created,
then the current span information must not propagated but instead renamed, so
then the current span information must not be propagated but instead renamed, so
that (1) it doesn't interfere when new trace information is set in the context,
and (2) the previous trace information is available to establish a parent /
child span relationship. This is done with `ThreadContext#newTraceContext()`.
@@ -126,7 +125,7 @@ of new trace contexts when child spans need to be created.
That's up to you. Be careful not to capture anything that could leak sensitive
or personal information.

## What is "scope" and when should I used it?
## What is "scope" and when should I use it?

Usually you won't need to.

@@ -157,9 +156,8 @@ explicitly opening a scope via the `Tracer`.


[otel]: https://opentelemetry.io/
[thread-context]: ./server/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java).
[thread-context]: ./server/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java
[w3c]: https://www.w3.org/TR/trace-context/
[tracing]: ./server/src/main/java/org/elasticsearch/tracing/
[config]: ./modules/apm/src/main/config/elasticapm.properties
[tracing]: ./server/src/main/java/org/elasticsearch/tracing
[agent-config]: https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html
[agent]: https://www.elastic.co/guide/en/apm/agent/java/current/index.html
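
The thread-context hunk above describes a concrete rule: when a new thread context is created, the current span information is renamed rather than propagated as-is, so a child span can record its own trace state while still being able to link back to its parent. Below is a minimal, self-contained Java sketch of that rename-then-fork pattern. The class and key names are illustrative only and are not the real Elasticsearch `ThreadContext` API; the actual mechanism is `ThreadContext#newTraceContext()` as named in the file above.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: fork a context for child work by moving the current trace
// entry under a "parent" key, leaving room for the child's own trace entry.
public class TraceContextSketch {
    static final String TRACE_KEY = "trace.context";
    static final String PARENT_TRACE_KEY = "trace.parent.context";

    static Map<String, String> newTraceContext(Map<String, String> current) {
        Map<String, String> child = new HashMap<>(current);
        String parentTrace = child.remove(TRACE_KEY);      // do not propagate the current span as-is ...
        if (parentTrace != null) {
            child.put(PARENT_TRACE_KEY, parentTrace);      // ... rename it so the parent stays reachable
        }
        return child;
    }

    public static void main(String[] args) {
        Map<String, String> ctx = new HashMap<>();
        ctx.put(TRACE_KEY, "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01");

        Map<String, String> childCtx = newTraceContext(ctx);
        // The child can now set its own TRACE_KEY without clobbering the parent,
        // and a parent/child span relationship can be established from PARENT_TRACE_KEY.
        System.out.println("parent trace still available: " + childCtx.get(PARENT_TRACE_KEY));
    }
}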
@@ -16,32 +16,21 @@ coming::[${majorDotMinorDotRevision}]
[[breaking-changes-${majorDotMinor}]]
=== Breaking changes
<% if (breakingByNotabilityByArea.isEmpty()) { %>
// tag::notable-breaking-changes[]
There are no breaking changes in {es} ${majorDotMinor}.
// end::notable-breaking-changes[]
<% } else { %>
The following changes in {es} ${majorDotMinor} might affect your applications
and prevent them from operating normally.
Before upgrading to ${majorDotMinor}, review these changes and take the described steps
to mitigate the impact.

<%
if (breakingByNotabilityByArea.getOrDefault(true, []).isEmpty()) { %>
// tag::notable-breaking-changes[]

There are no notable breaking changes in {es} ${majorDotMinor}.
// end::notable-breaking-changes[]
But there are some less critical breaking changes.
<% }
[true, false].each { isNotable ->
def breakingByArea = breakingByNotabilityByArea.getOrDefault(isNotable, [])
if (breakingByArea.isEmpty() == false) {
if (isNotable) {
/* No newline here, one will be added below */
print "// NOTE: The notable-breaking-changes tagged regions are re-used in the\n"
print "// Installation and Upgrade Guide\n"
print "// tag::notable-breaking-changes[]"
}

breakingByArea.eachWithIndex { area, breakingChanges, i ->
print "\n[discrete]\n"
print "[[breaking_${majorMinor}_${ area.toLowerCase().replaceAll("[^a-z0-9]+", "_") }_changes]]\n"
@@ -62,9 +51,6 @@ ${breaking.impact.trim()}
}
}

if (isNotable) {
print "// end::notable-breaking-changes[]\n"
}
}
}
}
@@ -83,16 +69,10 @@ after upgrading to ${majorDotMinor}.

To find out if you are using any deprecated functionality,
enable <<deprecation-logging, deprecation logging>>.

<%
[true, false].each { isNotable ->
def deprecationsByArea = deprecationsByNotabilityByArea.getOrDefault(isNotable, [])
if (deprecationsByArea.isEmpty() == false) {
if (isNotable) {
/* No newline here, one will be added below */
print "// tag::notable-breaking-changes[]"
}

deprecationsByArea.eachWithIndex { area, deprecations, i ->
print "\n[discrete]\n"
print "[[deprecations_${majorMinor}_${ area.toLowerCase().replaceAll("[^a-z0-9]+", "_") }]]\n"
@@ -113,9 +93,6 @@ ${deprecation.impact.trim()}
}
}

if (isNotable) {
print "// end::notable-breaking-changes[]\n"
}
}
}
} %>
@@ -21,9 +21,6 @@ and prevent them from operating normally.
Before upgrading to 8.4, review these changes and take the described steps
to mitigate the impact.

// NOTE: The notable-breaking-changes tagged regions are re-used in the
// Installation and Upgrade Guide
// tag::notable-breaking-changes[]
[discrete]
[[breaking_84_api_changes]]
==== API changes
@@ -64,7 +61,6 @@ Breaking change details 4
*Impact* +
Breaking change impact description 4
====
// end::notable-breaking-changes[]

[discrete]
[[breaking_84_transform_changes]]
@@ -95,7 +91,6 @@ after upgrading to 8.4.
To find out if you are using any deprecated functionality,
enable <<deprecation-logging, deprecation logging>>.

// tag::notable-breaking-changes[]
[discrete]
[[deprecations_84_cluster_and_node_setting]]
==== Cluster and node setting deprecations
@@ -121,7 +116,6 @@ Deprecation change details 6
*Impact* +
Deprecation change impact description 6
====
// end::notable-breaking-changes[]

[discrete]
[[deprecations_84_cluster_and_node_setting]]
@@ -1213,12 +1213,12 @@ private void waitForProcessToExit(ProcessHandle processHandle) {
try {
processHandle.onExit().get(ES_DESTROY_TIMEOUT, ES_DESTROY_TIMEOUT_UNIT);
} catch (InterruptedException e) {
LOGGER.info("Interrupted while waiting for ES process", e);
LOGGER.info("[{}] Interrupted while waiting for ES process", name, e);
Thread.currentThread().interrupt();
} catch (ExecutionException e) {
LOGGER.info("Failure while waiting for process to exist", e);
LOGGER.info("[{}] Failure while waiting for process to exist", name, e);
} catch (TimeoutException e) {
LOGGER.info("Timed out waiting for process to exit", e);
LOGGER.info("[{}] Timed out waiting for process to exit", name, e);
}
}

38 changes: 36 additions & 2 deletions catalog-info.yaml
@@ -92,7 +92,41 @@ spec:
elasticsearch-team: {}
ml-core: {}
everyone:
access_level: READ_ONLY
access_level: BUILD_AND_READ
provider_settings:
build_branches: false
build_pull_requests: false
publish_commit_status: false
trigger_mode: none
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: buildkite-pipeline-elasticsearch-periodic-trigger
description: Triggers periodic pipeline for all required branches
links:
- title: Pipeline
url: https://buildkite.com/elastic/elasticsearch-periodic-trigger
spec:
type: buildkite-pipeline
system: buildkite
owner: group:elasticsearch-team
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
description: ":elasticsearch: Triggers periodic pipeline for all required branches"
name: elasticsearch / periodic / trigger
spec:
repository: elastic/elasticsearch
pipeline_file: .buildkite/pipelines/periodic.trigger.yml
branch_configuration: main
teams:
elasticsearch-team: {}
ml-core: {}
everyone:
access_level: BUILD_AND_READ
provider_settings:
build_branches: false
build_pull_requests: false
@@ -102,4 +136,4 @@ spec:
Periodically on main:
branch: main
cronline: "0 0,8,16 * * * America/New_York"
message: "Tests and checks that are run 3x daily"
message: "Triggers pipelines 3x daily"
5 changes: 5 additions & 0 deletions docs/changelog/94132.yaml
@@ -0,0 +1,5 @@
pr: 94132
summary: HDFS plugin add replication_factor param
area: Snapshot/Restore
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/97630.yaml
@@ -0,0 +1,5 @@
pr: 97630
summary: Add an API for managing the settings of Security system indices
area: Security
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/97992.yaml
@@ -0,0 +1,6 @@
pr: 97401
summary: Fix transform incorrectly calculating date bucket on updating old data
area: Transform
type: bug
issues:
- 97101
6 changes: 6 additions & 0 deletions docs/changelog/98067.yaml
@@ -0,0 +1,6 @@
pr: 98067
summary: Avoid double get
area: Authentication
type: enhancement
issues:
- 97928
5 changes: 5 additions & 0 deletions docs/changelog/98083.yaml
@@ -0,0 +1,5 @@
pr: 98083
summary: Collect additional object store stats for S3
area: Snapshot/Restore
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/98113.yaml
@@ -0,0 +1,5 @@
pr: 98113
summary: Fix APM trace start time
area: Infra/Core
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/98167.yaml
@@ -0,0 +1,6 @@
pr: 98167
summary: Fix failure processing Question Answering model output where the input has been spanned over multiple sequences
area: Machine Learning
type: bug
issues:
- 97917
5 changes: 5 additions & 0 deletions docs/changelog/98176.yaml
@@ -0,0 +1,5 @@
pr: 98176
summary: Enhance regex performance with duplicate wildcards
area: Infra/Core
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/plugins/repository-hdfs.asciidoc
@@ -77,6 +77,12 @@ include::repository-shared-settings.asciidoc[]
the pattern with the hostname of the node at runtime (see
link:repository-hdfs-security-runtime[Creating the Secure Repository]).

`replication_factor`::

The replication factor for all new HDFS files created by this repository.
Must be greater or equal to `dfs.replication.min` and less or equal to `dfs.replication.max` HDFS option.
Defaults to using HDFS cluster setting.

[[repository-hdfs-availability]]
[discrete]
===== A note on HDFS availability
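
The `replication_factor` setting documented in the hunk above is supplied as a repository setting at registration time. The sketch below shows one way that might look using the Java low-level REST client; the host, repository name, `uri`, `path`, and the value `2` are placeholders for illustration and are not taken from this commit.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hedged sketch: registering an HDFS snapshot repository with the new
// replication_factor setting. All concrete values below are placeholders.
public class RegisterHdfsRepository {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_snapshot/my_hdfs_repository");
            request.setJsonEntity("""
                {
                  "type": "hdfs",
                  "settings": {
                    "uri": "hdfs://namenode:8020/",
                    "path": "elasticsearch/repositories/my_hdfs_repository",
                    "replication_factor": 2
                  }
                }
                """);
            client.performRequest(request);
        }
    }
}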
@@ -241,9 +241,10 @@ POST /sales/_search?size=0
--------------------------------------------------
// TEST[setup:sales]
<1> Documents without a value in the `tag` field will fall into the same bucket as documents that have the value `N/A`.
==== Execution Hint

There are different mechanisms by which cardinality aggregations can be executed:
==== Execution hint

You can run cardinality aggregations using different mechanisms:

- by using field values directly (`direct`)
- by using global ordinals of the field and resolving those values after
@@ -252,13 +253,14 @@ There are different mechanisms by which cardinality aggregations can be executed
segment (`segment_ordinals`)

Additionally, there are two "heuristic based" modes. These modes will cause
Elasticsearch to use some data about the state of the index to choose an
{es} to use some data about the state of the index to choose an
appropriate execution method. The two heuristics are:
- `save_time_heuristic` - this is the default in Elasticsearch 8.4 and later.
- `save_memory_heuristic` - this was the default in Elasticsearch 8.3 and

- `save_time_heuristic` - this is the default in {es} 8.4 and later.
- `save_memory_heuristic` - this was the default in {es} 8.3 and
earlier

When not specified, Elasticsearch will apply a heuristic to chose the
appropriate mode. Also note that some data (i.e. non-ordinal fields), `direct`
When not specified, {es} will apply a heuristic to choose the
appropriate mode. Also note that for some data (non-ordinal fields), `direct`
is the only option, and the hint will be ignored in these cases. Generally
speaking, it should not be necessary to set this value.
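
A hedged example of pinning one of the execution modes listed above on a cardinality aggregation, again via the Java low-level REST client. The hunk only names the modes; the `execution_hint` field name and the request shape are assumptions for illustration, while the `sales` index and `tag` field reuse names that appear in the surrounding documentation.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Hedged sketch: ask for the memory-saving heuristic instead of the 8.4+ default.
public class CardinalityExecutionHint {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/sales/_search?size=0");
            request.setJsonEntity("""
                {
                  "aggs": {
                    "tag_cardinality": {
                      "cardinality": {
                        "field": "tag",
                        "execution_hint": "save_memory_heuristic"
                      }
                    }
                  }
                }
                """);
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}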
5 changes: 2 additions & 3 deletions docs/reference/cluster/nodes-stats.asciidoc
@@ -682,12 +682,11 @@ Number of query cache misses.

`cache_size`::
(integer)
Size, in bytes, of the query cache.
Current number of cached queries.

`cache_count`::
(integer)
Count of queries
in the query cache.
Total number of all queries that have been cached.

`evictions`::
(integer)
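
The hunk above reworks the descriptions of `cache_size` (queries currently cached) and `cache_count` (total queries that have been cached). One quick way to inspect those counters is the nodes stats API; the sketch below is illustrative, and the `filter_path` expression is an assumption about how you might narrow the response rather than something taken from this commit.

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Hedged sketch: fetch node-level query cache stats and print the raw JSON.
public class QueryCacheStats {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/_nodes/stats/indices");
            request.addParameter("filter_path", "nodes.*.indices.query_cache");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}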