Merge remote-tracking branch 'elastic/master' into optimize-warning-head-de-duplication

* elastic/master: (24 commits)
  [TEST] Mute MlMappingsUpgradeIT testMappingsUpgrade
  Streamline skip_unavailable handling (elastic#37672)
  Only bootstrap and elect node in current voting configuration (elastic#37712)
  Ensure either success or failure path for SearchOperationListener is called (elastic#37467)
  Target only specific index in update settings test
  Add a note how to benchmark Elasticsearch
  Don't use Groovy's `withDefault` (elastic#37726)
  Adapt SyncedFlushService (elastic#37691)
  Mute FilterAggregatorTests#testRandom
  Switch mapping/aggregations over to java time (elastic#36363)
  [ML] Update ML results mappings on process start (elastic#37706)
  Modify removal_of_types.asciidoc (elastic#37648)
  Fix edge case in PutMappingRequestTests (elastic#37665)
  Use new bulk API endpoint in the docs (elastic#37698)
  Expose sequence number and primary terms in search responses (elastic#37639)
  Remove LicenseServiceClusterNotRecoveredTests (elastic#37528)
  Migrate SpecificMasterNodesIT to Zen2 (elastic#37532)
  Fix MetaStateFormat tests
  Use plain text instead of latexmath
  Fix a typo in a warning message in TestFixturesPlugin (elastic#37631)
  ...
jasontedor committed Jan 23, 2019
2 parents 6e1ecec + 6a5d9d9 commit 90cdc65
Showing 284 changed files with 4,331 additions and 3,453 deletions.
10 changes: 10 additions & 0 deletions TESTING.asciidoc
@@ -631,3 +631,13 @@ inside `/etc/hosts`, e.g.:
255.255.255.255 broadcasthost
::1 localhost ElasticMBP.local`
....

== Benchmarking

For changes that might affect the performance characteristics of Elasticsearch,
you should also run macrobenchmarks. We maintain a macrobenchmarking tool
called https://github.com/elastic/rally[Rally],
which you can use to measure the performance impact. It comes with a set of
default benchmarks that we also
https://elasticsearch-benchmarks.elastic.co/[run every night]. To get started,
please see https://esrally.readthedocs.io/en/stable/[Rally's documentation].
@@ -55,4 +55,3 @@ public TemporalAccessor parseJodaDate() {
return jodaFormatter.parse("1234567890");
}
}

@@ -392,7 +392,7 @@ class BuildPlugin implements Plugin<Project> {
static void requireJavaHome(Task task, int version) {
Project rootProject = task.project.rootProject // use root project for global accounting
if (rootProject.hasProperty('requiredJavaVersions') == false) {
rootProject.rootProject.ext.requiredJavaVersions = [:].withDefault{key -> return []}
rootProject.rootProject.ext.requiredJavaVersions = [:]
rootProject.gradle.taskGraph.whenReady { TaskExecutionGraph taskGraph ->
List<String> messages = []
for (entry in rootProject.requiredJavaVersions) {
@@ -415,7 +415,7 @@ class BuildPlugin implements Plugin<Project> {
throw new GradleException("JAVA${version}_HOME required to run task:\n${task}")
}
} else {
rootProject.requiredJavaVersions.get(version).add(task)
rootProject.requiredJavaVersions.getOrDefault(version, []).add(task)
}
}

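The BuildPlugin hunks above swap Groovy's `withDefault` map for a plain map plus `getOrDefault`. As a hedged Java sketch (class and task names are illustrative, not from the build), the behavioral difference worth knowing when accumulating per-key lists is that `getOrDefault` returns a fallback without storing it, while `computeIfAbsent` inserts the value on first access:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerKeyLists {
    public static void main(String[] args) {
        Map<Integer, List<String>> requiredJavaVersions = new HashMap<>();

        // getOrDefault returns the fallback list but does NOT insert it,
        // so this add is lost when the key is absent:
        requiredJavaVersions.getOrDefault(8, new ArrayList<>()).add("compileJava");
        System.out.println(requiredJavaVersions.containsKey(8)); // false

        // computeIfAbsent inserts the list on first access, so the add sticks:
        requiredJavaVersions.computeIfAbsent(8, k -> new ArrayList<>()).add("compileJava");
        System.out.println(requiredJavaVersions.get(8)); // [compileJava]
    }
}
```

This is only a sketch of the two `Map` methods involved, not a claim about how the Gradle plugin populates its map.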
@@ -96,7 +96,7 @@ public void apply(Project project) {
if (dockerComposeSupported(project) == false) {
project.getLogger().warn(
"Tests for {} require docker-compose at /usr/local/bin/docker-compose or /usr/bin/docker-compose " +
"but none could not be found so these will be skipped", project.getPath()
"but none could be found so these will be skipped", project.getPath()
);
tasks.withType(getTaskClass("com.carrotsearch.gradle.junit4.RandomizedTestingTask"), task ->
task.setEnabled(false)
@@ -1177,7 +1177,7 @@ public void testIndexPutSettings() throws IOException {
createIndex(index, Settings.EMPTY);

assertThat(dynamicSetting.getDefault(Settings.EMPTY), not(dynamicSettingValue));
UpdateSettingsRequest dynamicSettingRequest = new UpdateSettingsRequest();
UpdateSettingsRequest dynamicSettingRequest = new UpdateSettingsRequest(index);
dynamicSettingRequest.settings(Settings.builder().put(dynamicSettingKey, dynamicSettingValue).build());
AcknowledgedResponse response = execute(dynamicSettingRequest, highLevelClient().indices()::putSettings,
highLevelClient().indices()::putSettingsAsync);
@@ -1187,7 +1187,7 @@ public void testIndexPutSettings() throws IOException {
assertThat(indexSettingsAsMap.get(dynamicSettingKey), equalTo(String.valueOf(dynamicSettingValue)));

assertThat(staticSetting.getDefault(Settings.EMPTY), not(staticSettingValue));
UpdateSettingsRequest staticSettingRequest = new UpdateSettingsRequest();
UpdateSettingsRequest staticSettingRequest = new UpdateSettingsRequest(index);
staticSettingRequest.settings(Settings.builder().put(staticSettingKey, staticSettingValue).build());
ElasticsearchException exception = expectThrows(ElasticsearchException.class, () -> execute(staticSettingRequest,
highLevelClient().indices()::putSettings, highLevelClient().indices()::putSettingsAsync));
@@ -1207,7 +1207,7 @@ public void testIndexPutSettings() throws IOException {
assertThat(indexSettingsAsMap.get(staticSettingKey), equalTo(staticSettingValue));

assertThat(unmodifiableSetting.getDefault(Settings.EMPTY), not(unmodifiableSettingValue));
UpdateSettingsRequest unmodifiableSettingRequest = new UpdateSettingsRequest();
UpdateSettingsRequest unmodifiableSettingRequest = new UpdateSettingsRequest(index);
unmodifiableSettingRequest.settings(Settings.builder().put(unmodifiableSettingKey, unmodifiableSettingValue).build());
exception = expectThrows(ElasticsearchException.class, () -> execute(unmodifiableSettingRequest,
highLevelClient().indices()::putSettings, highLevelClient().indices()::putSettingsAsync));
@@ -19,15 +19,14 @@

package org.elasticsearch.client.indices;

import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.RandomCreateIndexGenerator;
import org.elasticsearch.test.AbstractXContentTestCase;

import java.io.IOException;
import java.util.Map;

@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/37654")
public class PutMappingRequestTests extends AbstractXContentTestCase<PutMappingRequest> {

@Override
@@ -47,7 +46,10 @@ protected PutMappingRequest createTestInstance() {
@Override
protected PutMappingRequest doParseInstance(XContentParser parser) throws IOException {
PutMappingRequest request = new PutMappingRequest();
request.source(parser.map());
Map<String, Object> map = parser.map();
if (map.isEmpty() == false) {
request.source(map);
}
return request;
}

@@ -58,11 +60,16 @@ protected boolean supportsUnknownFields() {

@Override
protected void assertEqualInstances(PutMappingRequest expected, PutMappingRequest actual) {
try (XContentParser expectedJson = createParser(expected.xContentType().xContent(), expected.source());
XContentParser actualJson = createParser(actual.xContentType().xContent(), actual.source())) {
assertEquals(expectedJson.mapOrdered(), actualJson.mapOrdered());
} catch (IOException e) {
throw new RuntimeException(e);
if (actual.source() != null) {
try (XContentParser expectedJson = createParser(expected.xContentType().xContent(), expected.source());
XContentParser actualJson = createParser(actual.xContentType().xContent(), actual.source())) {
assertEquals(expectedJson.mapOrdered(), actualJson.mapOrdered());
} catch (IOException e) {
throw new RuntimeException(e);
}
} else {
// if the original `source` is null, the parsed source should be so too
assertNull(expected.source());
}
}
}
@@ -30,7 +30,7 @@ Example:

[source,js]
--------------------------------------------------
PUT /emails/_doc/_bulk?refresh
PUT /emails/_bulk?refresh
{ "index" : { "_id" : 1 } }
{ "accounts" : ["hillary", "sidney"]}
{ "index" : { "_id" : 2 } }
@@ -9,7 +9,7 @@ Example:

[source,js]
--------------------------------------------------
PUT /logs/_doc/_bulk?refresh
PUT /logs/_bulk?refresh
{ "index" : { "_id" : 1 } }
{ "body" : "warning: page could not be rendered" }
{ "index" : { "_id" : 2 } }
@@ -147,7 +147,7 @@ Imagine a situation where you index the following documents into an index with 2

[source,js]
--------------------------------------------------
PUT /transactions/_doc/_bulk?refresh
PUT /transactions/_bulk?refresh
{"index":{"_id":1}}
{"type": "sale","amount": 80}
{"index":{"_id":2}}
@@ -25,6 +25,7 @@ The top_hits aggregation returns regular search hits, because of this many per h
* <<search-request-script-fields,Script fields>>
* <<search-request-docvalue-fields,Doc value fields>>
* <<search-request-version,Include versions>>
* <<search-request-seq-no-primary-term,Include Sequence Numbers and Primary Terms>>

==== Example

20 changes: 10 additions & 10 deletions docs/reference/docs/bulk.asciidoc
@@ -57,7 +57,7 @@ newlines. Example:
[source,js]
--------------------------------------------------
$ cat requests
{ "index" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } }
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_type":"_doc","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}
@@ -72,12 +72,12 @@ example of a correct sequence of bulk commands:
[source,js]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } }
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "2" } }
{ "create" : { "_index" : "test", "_type" : "_doc", "_id" : "3" } }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_type" : "_doc", "_index" : "test"} }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
--------------------------------------------------
// CONSOLE
@@ -265,15 +265,15 @@ the options. Example with update actions:
[source,js]
--------------------------------------------------
POST _bulk
{ "update" : {"_id" : "1", "_type" : "_doc", "_index" : "index1", "retry_on_conflict" : 3} }
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_type" : "_doc", "_index" : "index1", "retry_on_conflict" : 3} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_type" : "_doc", "_index" : "index1", "retry_on_conflict" : 3} }
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_type" : "_doc", "_index" : "index1", "_source" : true} }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_type" : "_doc", "_index" : "index1"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
--------------------------------------------------
// CONSOLE
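The bulk hunks above remove `"_type" : "_doc"` from every action's metadata. As a hedged Java sketch of the NDJSON shape the docs describe (the helper name is ours, and real code should build JSON with a library rather than string concatenation), each action line is followed by its source line and every line ends with a newline:

```java
public class BulkPayload {
    // Build one action/source pair for a typeless bulk "index" operation.
    // Each line must be terminated by a newline, including the last one.
    static String indexAction(String index, String id, String sourceJson) {
        // Note: no "_type" key in the action metadata.
        return "{\"index\":{\"_index\":\"" + index + "\",\"_id\":\"" + id + "\"}}\n"
                + sourceJson + "\n";
    }

    public static void main(String[] args) {
        String body = indexAction("test", "1", "{\"field1\":\"value1\"}")
                + indexAction("test", "2", "{\"field1\":\"value2\"}");
        System.out.print(body);
    }
}
```

The resulting string is what would be POSTed to `_bulk` with the `application/x-ndjson` content type, as in the curl example earlier in the diff.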
2 changes: 1 addition & 1 deletion docs/reference/docs/concurrency-control.asciidoc
@@ -87,7 +87,7 @@ returns:


Note: The <<search-search,Search API>> can return the `_seq_no` and `_primary_term`
for each search hit by requesting the `_seq_no` and `_primary_term` <<search-request-docvalue-fields,Doc Value Fields>>.
for each search hit by setting <<search-request-seq-no-primary-term,`seq_no_primary_term` parameter>>.

The sequence number and the primary term uniquely identify a change. By noting down
the sequence number and primary term returned, you can make sure to only change the
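The note above says a sequence number plus primary term uniquely identifies a change, so a writer can refuse to apply an update if the document changed since it was read. A purely illustrative Java sketch of that check (not Elasticsearch code; the types and names here are our own):

```java
public class OptimisticUpdate {
    // A change to a document is uniquely identified by (primaryTerm, seqNo).
    record Version(long primaryTerm, long seqNo) {}

    // Apply a write only if the document still carries the version we read.
    static boolean tryUpdate(Version current, Version expected) {
        return current.equals(expected);
    }

    public static void main(String[] args) {
        Version read = new Version(1, 5);
        System.out.println(tryUpdate(new Version(1, 5), read)); // true: unchanged since our read
        System.out.println(tryUpdate(new Version(1, 6), read)); // false: a concurrent write happened
    }
}
```

In Elasticsearch itself this compare-and-set is expressed by passing the previously observed values back on the write request rather than by client-side comparison.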
6 changes: 3 additions & 3 deletions docs/reference/getting-started.asciidoc
@@ -633,7 +633,7 @@ As a quick example, the following call indexes two documents (ID 1 - John Doe an

[source,js]
--------------------------------------------------
POST /customer/_doc/_bulk?pretty
POST /customer/_bulk?pretty
{"index":{"_id":"1"}}
{"name": "John Doe" }
{"index":{"_id":"2"}}
@@ -645,7 +645,7 @@ This example updates the first document (ID of 1) and then deletes the second do

[source,sh]
--------------------------------------------------
POST /customer/_doc/_bulk?pretty
POST /customer/_bulk?pretty
{"update":{"_id":"1"}}
{"doc": { "name": "John Doe becomes Jane Doe" } }
{"delete":{"_id":"2"}}
@@ -692,7 +692,7 @@ You can download the sample dataset (accounts.json) from https://github.com/elas

[source,sh]
--------------------------------------------------
curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_doc/_bulk?pretty&refresh" --data-binary "@accounts.json"
curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary "@accounts.json"
curl "localhost:9200/_cat/indices?v"
--------------------------------------------------
// NOTCONSOLE
30 changes: 17 additions & 13 deletions docs/reference/mapping/removal_of_types.asciidoc
@@ -112,7 +112,7 @@ have looked something like this:

[source,js]
----
PUT twitter?include_type_name=true
PUT twitter
{
"mappings": {
"user": {
@@ -157,16 +157,16 @@ GET twitter/tweet/_search
----
// NOTCONSOLE

You could achieve the same thing by adding a custom `type` field as follows:
You can achieve the same thing by adding a custom `type` field as follows:

[source,js]
----
PUT twitter?include_type_name=true
PUT twitter?include_type_name=true <1>
{
"mappings": {
"_doc": {
"properties": {
"type": { "type": "keyword" }, <1>
"type": { "type": "keyword" }, <2>
"name": { "type": "text" },
"user_name": { "type": "keyword" },
"email": { "type": "keyword" },
@@ -204,15 +204,17 @@ GET twitter/_search
},
"filter": {
"match": {
"type": "tweet" <1>
"type": "tweet" <2>
}
}
}
}
}
----
// NOTCONSOLE
<1> The explicit `type` field takes the place of the implicit `_type` field.
<1> Use `include_type_name=true` if you need to use the "old" syntax, including
the `_doc` object, as in this example
<2> The explicit `type` field takes the place of the implicit `_type` field.

[float]
==== Parent/Child without mapping types
@@ -299,7 +301,7 @@ This first example splits our `twitter` index into a `tweets` index and a

[source,js]
----
PUT users?include_type_name=true
PUT users
{
"settings": {
"index.mapping.single_type": true
@@ -321,7 +323,7 @@ PUT users?include_type_name=true
}
}
PUT tweets?include_type_name=true
PUT tweets
{
"settings": {
"index.mapping.single_type": true
@@ -376,7 +378,7 @@ documents of different types which have conflicting IDs:

[source,js]
----
PUT new_twitter?include_type_name=true
PUT new_twitter
{
"mappings": {
"_doc": {
@@ -427,10 +429,12 @@ POST _reindex
[float]
=== Use `include_type_name=false` to prepare for upgrade to 8.0

Index creation, mappings and document APIs support the `include_type_name`
option. When set to `false`, this option enables the behavior that will become
default in 8.0 when types are removed. See some examples of interactions with
Elasticsearch with this option turned off:
Index creation and mapping APIs support a new `include_type_name` URL parameter
starting with version 6.7. It will default to `true` in version 6.7 and to
`false` in version 7.0, and will be removed in version 8.0. When set to `true`,
this parameter enables the pre-7.0 behavior of using type names in the API.

See some examples of interactions with Elasticsearch with this option turned off:

[float]
==== Index creation
4 changes: 2 additions & 2 deletions docs/reference/query-dsl/script-score-query.asciidoc
@@ -53,7 +53,7 @@ rewriting equivalent functions of your own, as these functions try
to be the most efficient by using the internal mechanisms.

===== rational
latexmath:[rational(value,k) = value/(k + value)]
`rational(value,k) = value/(k + value)`

[source,js]
--------------------------------------------------
@@ -64,7 +64,7 @@
// NOTCONSOLE

===== sigmoid
latexmath:[sigmoid(value, k, a) = value^a/ (k^a + value^a)]
`sigmoid(value, k, a) = value^a/ (k^a + value^a)`

[source,js]
--------------------------------------------------
2 changes: 1 addition & 1 deletion docs/reference/search/request-body.asciidoc
@@ -213,7 +213,7 @@ include::request/preference.asciidoc[]

include::request/explain.asciidoc[]

include::request/version.asciidoc[]
include::request/version-and-seq-no.asciidoc[]

include::request/index-boost.asciidoc[]

1 change: 1 addition & 0 deletions docs/reference/search/request/inner-hits.asciidoc
@@ -76,6 +76,7 @@ Inner hits also supports the following per document features:
* <<search-request-script-fields,Script fields>>
* <<search-request-docvalue-fields,Doc value fields>>
* <<search-request-version,Include versions>>
* <<search-request-seq-no-primary-term,Include Sequence Numbers and Primary Terms>>

[[nested-inner-hits]]
==== Nested inner hits