Merge branch '8.x' into bp115518
elasticmachine authored Oct 25, 2024
2 parents 1a60725 + b151c14 commit 9ed5827
Showing 101 changed files with 2,510 additions and 941 deletions.
Original file line number Diff line number Diff line change
@@ -21,9 +21,6 @@ public enum DockerBase {
// The Iron Bank base image is UBI (albeit hardened), but we are required to parameterize the Docker build
IRON_BANK("${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}", "-ironbank", "yum"),

// Base image with extras for Cloud
CLOUD("ubuntu:20.04", "-cloud", "apt-get"),

// Chainguard based wolfi image with latest jdk
// This is usually updated via renovatebot
// spotless:off
25 changes: 3 additions & 22 deletions distribution/docker/build.gradle
@@ -288,20 +288,6 @@ void addBuildDockerContextTask(Architecture architecture, DockerBase base) {
}
}

if (base == DockerBase.CLOUD) {
// If we're performing a release build, but `build.id` hasn't been set, we can
// infer that we're not at the Docker building stage of the build, and therefore
// we should skip the beats part of the build.
String buildId = providers.systemProperty('build.id').getOrNull()
boolean includeBeats = VersionProperties.isElasticsearchSnapshot() == true || buildId != null || useDra

if (includeBeats) {
from configurations.getByName("filebeat_${architecture.classifier}")
from configurations.getByName("metricbeat_${architecture.classifier}")
}
// For some reason, the artifact name can differ depending on what repository we used.
rename ~/((?:file|metric)beat)-.*\.tar\.gz$/, "\$1-${VersionProperties.elasticsearch}.tar.gz"
}
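The deleted block above renamed beat tarballs to a fixed Elasticsearch version because, as the comment notes, the artifact name can differ depending on the source repository. As a minimal illustrative sketch (not part of the build; the version string `8.16.0` stands in for `VersionProperties.elasticsearch`), the same rename rule looks like this in Python:

```python
import re

# Mirrors the Gradle rule:
#   rename ~/((?:file|metric)beat)-.*\.tar\.gz$/, "$1-${VersionProperties.elasticsearch}.tar.gz"
PATTERN = re.compile(r"((?:file|metric)beat)-.*\.tar\.gz$")

def rename_beat_artifact(name: str, es_version: str = "8.16.0") -> str:
    """Normalize a beat tarball name to a fixed version (illustration only)."""
    return PATTERN.sub(rf"\g<1>-{es_version}.tar.gz", name)

print(rename_beat_artifact("filebeat-8.16.0-SNAPSHOT-linux-x86_64.tar.gz"))
# -> filebeat-8.16.0.tar.gz
```

Whatever suffix the source repository attached, the output name collapses to `<beat>-<version>.tar.gz`.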
Provider<DockerSupportService> serviceProvider = GradleUtils.getBuildService(
project.gradle.sharedServices,
DockerSupportPlugin.DOCKER_SUPPORT_SERVICE_NAME
@@ -381,7 +367,7 @@ private static List<String> generateTags(DockerBase base, Architecture architect
String image = "elasticsearch${base.suffix}"

String namespace = 'elasticsearch'
if (base == DockerBase.CLOUD || base == DockerBase.CLOUD_ESS) {
if (base == DockerBase.CLOUD_ESS) {
namespace += '-ci'
}

@@ -439,7 +425,7 @@ void addBuildDockerImageTask(Architecture architecture, DockerBase base) {

}

if (base != DockerBase.IRON_BANK && base != DockerBase.CLOUD && base != DockerBase.CLOUD_ESS) {
if (base != DockerBase.IRON_BANK && base != DockerBase.CLOUD_ESS) {
tasks.named("assemble").configure {
dependsOn(buildDockerImageTask)
}
@@ -548,21 +534,16 @@ subprojects { Project subProject ->
base = DockerBase.IRON_BANK
} else if (subProject.name.contains('cloud-ess-')) {
base = DockerBase.CLOUD_ESS
} else if (subProject.name.contains('cloud-')) {
base = DockerBase.CLOUD
} else if (subProject.name.contains('wolfi-ess')) {
base = DockerBase.WOLFI_ESS
} else if (subProject.name.contains('wolfi-')) {
base = DockerBase.WOLFI
}

final String arch = architecture == Architecture.AARCH64 ? '-aarch64' : ''
final String extension = base == DockerBase.UBI ? 'ubi.tar' :
(base == DockerBase.IRON_BANK ? 'ironbank.tar' :
(base == DockerBase.CLOUD ? 'cloud.tar' :
(base == DockerBase.CLOUD_ESS ? 'cloud-ess.tar' :
(base == DockerBase.WOLFI ? 'wolfi.tar' :
'docker.tar'))))
'docker.tar')))
final String artifactName = "elasticsearch${arch}${base.suffix}_test"

final String exportTaskName = taskName("export", architecture, base, 'DockerImage')
2 changes: 0 additions & 2 deletions distribution/docker/cloud-docker-aarch64-export/build.gradle

This file was deleted.

2 changes: 0 additions & 2 deletions distribution/docker/cloud-docker-export/build.gradle

This file was deleted.

This file was deleted.

2 changes: 0 additions & 2 deletions distribution/docker/wolfi-ess-docker-export/build.gradle

This file was deleted.

19 changes: 19 additions & 0 deletions docs/changelog/113975.yaml
@@ -0,0 +1,19 @@
pr: 113975
summary: JDK locale database change
area: Mapping
type: breaking
issues: []
breaking:
title: JDK locale database change
area: Mapping
details: |
{es} 8.16 changes the version of the JDK that is included from version 22 to version 23. This changes the locale database that is used by Elasticsearch from the COMPAT database to the CLDR database. This change can cause significant differences to the textual date formats accepted by Elasticsearch, and to calculated week-dates.
If you run {es} 8.16 on JDK version 22 or below, it will use the COMPAT locale database to match the behavior of 8.15. However, starting with {es} 9.0, {es} will use the CLDR database regardless of JDK version it is run on.
impact: |
This affects you if you use custom date formats using textual or week-date field specifiers. If you use date fields or calculated week-dates that change between the COMPAT and CLDR databases, then this change will cause Elasticsearch to reject previously valid date fields as invalid data. You might need to modify your ingest or output integration code to account for the differences between these two JDK versions.
Starting in version 8.15.2, Elasticsearch will log deprecation warnings if you are using date format specifiers that might change on upgrading to JDK 23. These warnings are visible in Kibana.
For detailed guidance, refer to <<custom-date-format-locales,Differences in locale information between JDK versions>> and the https://ela.st/jdk-23-locales[Elastic blog].
notable: true
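The COMPAT-to-CLDR divergence itself is internal to the JDK, but the class of hazard the entry describes — two week-numbering conventions assigning different week-dates to the same calendar day — can be illustrated in any language. A hedged Python analogy (this demonstrates conflicting week conventions generally, not the actual Java locale databases):

```python
from datetime import date

# 2024-01-01 falls on a Monday. Conventions disagree about its week number,
# which is exactly the kind of calculated-week-date shift described above.
d = date(2024, 1, 1)

iso_week = d.isocalendar().week  # ISO 8601: weeks start Monday -> week 1
us_week = int(d.strftime("%U"))  # weeks start Sunday; days before the
                                 # first Sunday fall in week 0
print(iso_week, us_week)         # 1 0
```

A formatter switched from one convention to the other would emit a different week number for the same date, which is why previously valid week-date strings can be rejected after the database change.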
6 changes: 6 additions & 0 deletions docs/changelog/114665.yaml
@@ -0,0 +1,6 @@
pr: 114665
summary: Fixing remote ENRICH by pushing the Enrich inside `FragmentExec`
area: ES|QL
type: bug
issues:
- 105095
6 changes: 6 additions & 0 deletions docs/changelog/114990.yaml
@@ -0,0 +1,6 @@
pr: 114990
summary: Allow for queries on `_tier` to skip shards in the `can_match` phase
area: Search
type: bug
issues:
- 114910
6 changes: 6 additions & 0 deletions docs/changelog/115117.yaml
@@ -0,0 +1,6 @@
pr: 115117
summary: Report JVM stats for all memory pools (97046)
area: Infra/Core
type: bug
issues:
- 97046
29 changes: 29 additions & 0 deletions docs/changelog/115399.yaml
@@ -0,0 +1,29 @@
pr: 115399
summary: Adding breaking change entry for retrievers
area: Search
type: breaking
issues: []
breaking:
title: Reworking RRF retriever to be evaluated during rewrite phase
area: REST API
details: |-
In this release (8.16), we have introduced major changes to the retrievers framework
and how they can be evaluated, focusing mainly on compound retrievers
like `rrf` and `text_similarity_reranker`, which allowed us to support full
composability (i.e. any retriever can be nested under any compound retriever),
as well as supporting additional search features like collapsing, explaining,
aggregations, and highlighting.
To ensure consistency, and given that this rework is not available until 8.16,
`rrf` and `text_similarity_reranker` retriever queries will now
throw an exception in a mixed-cluster scenario, where some nodes are on the
current or a later version (i.e. >= 8.16) and others are on a previous version (<= 8.15).
As part of the rework, we have also removed the `_rank` property from
the responses of an `rrf` retriever.
impact: |-
- Users will not be able to use the `rrf` and `text_similarity_reranker` retrievers in a mixed cluster scenario
with previous releases (i.e. prior to 8.16), and the request will throw an `IllegalArgumentException`.
- `_rank` has now been removed from the output of the `rrf` retrievers, so attempting to parse the field
directly will throw an exception.
notable: false
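Since `_rank` is gone from `rrf` retriever hits, client code that read it must be adjusted. A minimal defensive sketch (the sample hit is hypothetical; `_index`, `_id`, and `_score` are standard search-response fields):

```python
# Hypothetical hit from an `rrf` retriever response in 8.16: no `_rank` key.
hit = {"_index": "my-index", "_id": "1", "_score": 0.83}

# Pre-8.16 clients could read hit["_rank"]; that now raises KeyError.
# Hits still arrive in rank order, so position substitutes for the field:
def rank_of(hits, doc_id):
    """Return the 1-based rank of a document in an already-ordered hit list."""
    for position, h in enumerate(hits, start=1):
        if h["_id"] == doc_id:
            return position
    return None

print(rank_of([hit], "1"))  # 1
```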
5 changes: 5 additions & 0 deletions docs/changelog/115429.yaml
@@ -0,0 +1,5 @@
pr: 115429
summary: "[otel-data] Add more kubernetes aliases"
area: Data streams
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/115430.yaml
@@ -0,0 +1,5 @@
pr: 115430
summary: Prevent NPE if model assignment is removed while waiting to start
area: Machine Learning
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/115459.yaml
@@ -0,0 +1,5 @@
pr: 115459
summary: Guard blob store local directory creation with `doPrivileged`
area: Infra/Core
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/115594.yaml
@@ -0,0 +1,6 @@
pr: 115594
summary: Update `BlobCacheBufferedIndexInput::readVLong` to correctly handle negative
long values
area: Search
type: bug
issues: []
62 changes: 62 additions & 0 deletions docs/reference/images/semantic-options.svg
2 changes: 2 additions & 0 deletions docs/reference/inference/inference-apis.asciidoc
@@ -19,6 +19,7 @@ the following APIs to manage {infer} models and perform {infer}:
* <<get-inference-api>>
* <<post-inference-api>>
* <<put-inference-api>>
* <<stream-inference-api>>
* <<update-inference-api>>

[[inference-landscape]]
@@ -56,6 +57,7 @@ include::delete-inference.asciidoc[]
include::get-inference.asciidoc[]
include::post-inference.asciidoc[]
include::put-inference.asciidoc[]
include::stream-inference.asciidoc[]
include::update-inference.asciidoc[]
include::service-alibabacloud-ai-search.asciidoc[]
include::service-amazon-bedrock.asciidoc[]
122 changes: 122 additions & 0 deletions docs/reference/inference/stream-inference.asciidoc
@@ -0,0 +1,122 @@
[role="xpack"]
[[stream-inference-api]]
=== Stream inference API

Streams a chat completion response.

IMPORTANT: The {infer} APIs enable you to use certain services, such as built-in {ml} models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
For built-in models and models uploaded through Eland, the {infer} APIs offer an alternative way to use and manage trained models.
However, if you do not plan to use the {infer} APIs to use these models or if you want to use non-NLP models, use the <<ml-df-trained-models-apis>>.


[discrete]
[[stream-inference-api-request]]
==== {api-request-title}

`POST /_inference/<inference_id>/_stream`

`POST /_inference/<task_type>/<inference_id>/_stream`


[discrete]
[[stream-inference-api-prereqs]]
==== {api-prereq-title}

* Requires the `monitor_inference` <<privileges-list-cluster,cluster privilege>>
(the built-in `inference_admin` and `inference_user` roles grant this privilege)
* You must use a client that supports streaming.


[discrete]
[[stream-inference-api-desc]]
==== {api-description-title}

The stream {infer} API enables real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation.
It only works with the `completion` task type.


[discrete]
[[stream-inference-api-path-params]]
==== {api-path-parms-title}

`<inference_id>`::
(Required, string)
The unique identifier of the {infer} endpoint.


`<task_type>`::
(Optional, string)
The type of {infer} task that the model performs.


[discrete]
[[stream-inference-api-request-body]]
==== {api-request-body-title}

`input`::
(Required, string or array of strings)
The text on which you want to perform the {infer} task.
`input` can be a single string or an array.
+
--
[NOTE]
====
Inference endpoints for the `completion` task type currently only support a
single string as input.
====
--


[discrete]
[[stream-inference-api-example]]
==== {api-examples-title}

The following example performs a completion on the example question with streaming.


[source,console]
------------------------------------------------------------
POST _inference/completion/openai-completion/_stream
{
"input": "What is Elastic?"
}
------------------------------------------------------------
// TEST[skip:TBD]


The API returns the following response:


[source,txt]
------------------------------------------------------------
event: message
data: {
"completion":[{
"delta":"Elastic"
}]
}
event: message
data: {
"completion":[{
"delta":" is"
},
{
"delta":" a"
}
]
}
event: message
data: {
"completion":[{
"delta":" software"
},
{
"delta":" company"
}]
}
(...)
------------------------------------------------------------
// NOTCONSOLE
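The response above is a server-sent-events stream: each `event: message` is followed by a `data:` line carrying a JSON chunk whose `completion[*].delta` fragments concatenate into the full answer. A minimal client-side sketch of that parsing, using a hard-coded abridged payload rather than a live connection:

```python
import json

# Abridged stream in the shape shown above.
RAW = (
    "event: message\n"
    'data: {"completion":[{"delta":"Elastic"}]}\n'
    "\n"
    "event: message\n"
    'data: {"completion":[{"delta":" is"},{"delta":" a"}]}\n'
    "\n"
)

def deltas(raw: str):
    """Yield each text fragment from the `data:` lines of an SSE payload."""
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = json.loads(line[len("data:"):])
            for part in payload["completion"]:
                yield part["delta"]

print("".join(deltas(RAW)))  # Elastic is a
```

A real client would read lines incrementally from the HTTP response instead of a string, emitting each delta as it arrives — which is the point of the streaming API.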
2 changes: 1 addition & 1 deletion docs/reference/mapping/params/format.asciidoc
@@ -65,7 +65,7 @@ affected specifiers, you may need to modify your ingest or output integration co
for the differences between these two JDK versions.

[[built-in-date-formats]]
==== Built In Formats
==== Built-in formats

Most of the below formats have a `strict` companion format, which means that
year, month and day parts of the month must use respectively 4, 2 and 2 digits
