Use data frame transform in docs and rename test classes
davidkyle committed Mar 12, 2019
1 parent 22c87b7 commit a0d8814
Showing 4 changed files with 20 additions and 12 deletions.
DataFrameTransformIT.java (test class renamed from DataFrameIT):

@@ -19,6 +19,7 @@
 
 package org.elasticsearch.client;
 
+import org.elasticsearch.ElasticsearchStatusException;
 import org.elasticsearch.client.core.AcknowledgedResponse;
 import org.elasticsearch.client.dataframe.DeleteDataFrameTransformRequest;
 import org.elasticsearch.client.dataframe.PutDataFrameTransformRequest;
@@ -39,8 +40,9 @@
 import java.util.Collections;
 
 import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
+import static org.hamcrest.Matchers.containsString;
 
-public class DataFrameIT extends ESRestHighLevelClientTestCase {
+public class DataFrameTransformIT extends ESRestHighLevelClientTestCase {
 
     private void createIndex(String indexName) throws IOException {
 
@@ -87,6 +89,12 @@ public void testCreateDelete() throws IOException {
         ack = execute(new DeleteDataFrameTransformRequest(transform.getId()), client::deleteDataFrameTransform,
                 client::deleteDataFrameTransformAsync);
         assertTrue(ack.isAcknowledged());
+
+        // The second delete should fail
+        ElasticsearchStatusException deleteError = expectThrows(ElasticsearchStatusException.class,
+                () -> execute(new DeleteDataFrameTransformRequest(transform.getId()), client::deleteDataFrameTransform,
+                        client::deleteDataFrameTransformAsync));
+        assertThat(deleteError.getMessage(), containsString("Transform with id [test-crud] could not be found"));
     }
 }
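The new assertions pin down the error contract: a second delete of the same transform surfaces as an `ElasticsearchStatusException` rather than a silent no-op. For callers outside the test harness's `execute` helper, the same failure can be handled directly. A minimal sketch, assuming the `dataFrame()` entry point on `RestHighLevelClient`, the synchronous `deleteDataFrameTransform(request, RequestOptions)` signature, and that a missing transform maps to a `NOT_FOUND` status (the test above only asserts on the message text):

import java.io.IOException;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.dataframe.DeleteDataFrameTransformRequest;
import org.elasticsearch.rest.RestStatus;

public final class DeleteTransformExample {
    /** Deletes the transform, treating "already deleted" as success. */
    static void deleteQuietly(RestHighLevelClient client, String transformId) throws IOException {
        try {
            client.dataFrame().deleteDataFrameTransform(
                    new DeleteDataFrameTransformRequest(transformId), RequestOptions.DEFAULT);
        } catch (ElasticsearchStatusException e) {
            if (e.status() != RestStatus.NOT_FOUND) { // assumed status for a missing transform
                throw e;
            }
            // not found: nothing left to delete
        }
    }
}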

DataFrameTransformDocumentationIT.java (test class renamed from DataFrameDocumentationIT):

@@ -47,7 +47,7 @@
 
 import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
 
-public class DataFrameDocumentationIT extends ESRestHighLevelClientTestCase {
+public class DataFrameTransformDocumentationIT extends ESRestHighLevelClientTestCase {
 
     private void createIndex(String indexName) throws IOException {
Delete Data Frame Transform documentation (asciidoc):

@@ -15,7 +15,7 @@ A +{request}+ object requires a non-null `id`.
 ---------------------------------------------------
 include-tagged::{doc-tests-file}[{api}-request]
 ---------------------------------------------------
-<1> Constructing a new request referencing an existing {dataframe-job}
+<1> Constructing a new request referencing an existing {dataframe-transform}
 
 include::../execution.asciidoc[]
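A minimal sketch of the construction the callout describes; the id value is a placeholder:

// The request deletes the transform with the given id; the id must be non-null.
DeleteDataFrameTransformRequest request = new DeleteDataFrameTransformRequest("my-transform");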
docs/java-rest/high-level/dataframe/put_data_frame.asciidoc (9 additions, 9 deletions):

@@ -6,7 +6,7 @@
 [id="{upid}-{api}"]
 === Put Data Frame Transform API
 
-The Put Data Frame Transform API is used to create a new {dataframe-job}.
+The Put Data Frame Transform API is used to create a new {dataframe-transform}.
 
 The API accepts a +{request}+ object as a request and returns a +{response}+.
 
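To make the round trip concrete, a minimal sketch, assuming `client` is a `RestHighLevelClient`, `config` is the `DataFrameTransformConfig` assembled in the next section, and the request is a thin wrapper around that config (an assumed constructor shape):

PutDataFrameTransformRequest request = new PutDataFrameTransformRequest(config); // assumed wrapper constructor
AcknowledgedResponse response =
        client.dataFrame().putDataFrameTransform(request, RequestOptions.DEFAULT);
boolean created = response.isAcknowledged(); // true when the transform was stored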
@@ -24,14 +24,14 @@ include-tagged::{doc-tests-file}[{api}-request]
 [id="{upid}-{api}-config"]
 ==== Data Frame Transform Configuration
 
-The `DataFrameTransformConfig` object contains all the details about the {dataframe-job}
+The `DataFrameTransformConfig` object contains all the details about the {dataframe-transform}
 configuration and contains the following arguments:
 
 ["source","java",subs="attributes,callouts,macros"]
 --------------------------------------------------
 include-tagged::{doc-tests-file}[{api}-config]
 --------------------------------------------------
-<1> The data frame transform ID
+<1> The {dataframe-transform} ID
 <2> The source index or index pattern
 <3> The destination index
 <4> Optionally a QueryConfig
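A sketch assembling the configuration in the order of the callouts above; the constructor shape is assumed from the callout list rather than taken from this commit, and all names are placeholders:

DataFrameTransformConfig config = new DataFrameTransformConfig(
        "example-transform",  // <1> the transform id
        "source-index",       // <2> the source index or index pattern
        "dest-index",         // <3> the destination index
        queryConfig,          // <4> optional query, see QueryConfig below
        pivotConfig);         // the pivot definition, see PivotConfig below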
@@ -40,7 +40,7 @@ include-tagged::{doc-tests-file}[{api}-config]
 [id="{upid}-{api}-query-config"]
 ==== QueryConfig
 
-The query with which to select data from the source index.
+The query with which to select data from the source.
 If not set a `match_all` query is used by default.
 
 ["source","java",subs="attributes,callouts,macros"]
 --------------------------------------------------
 include-tagged::{doc-tests-file}[{api}-query-config]
 --------------------------------------------------
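A sketch, assuming `QueryConfig` wraps a standard `QueryBuilder`; spelling out `match_all` has the same effect as leaving the query unset:

// Same as the default: select every document in the source.
QueryConfig queryConfig = new QueryConfig(QueryBuilders.matchAllQuery()); // org.elasticsearch.index.query.QueryBuilders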
@@ -50,7 +50,7 @@ include-tagged::{doc-tests-file}[{api}-query-config]
 
 ==== PivotConfig
 
-Defines the pivot transform `group by` fields and the aggregation to reduce the data.
+Defines the pivot function `group by` fields and the aggregation to reduce the data.
 
 ["source","java",subs="attributes,callouts,macros"]
 --------------------------------------------------
 include-tagged::{doc-tests-file}[{api}-pivot-config]
 --------------------------------------------------
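A sketch of the composition this section implies, assuming `PivotConfig` simply pairs the grouping with the aggregations described next:

// groupConfig and aggregationConfig are built in the following two sections.
PivotConfig pivotConfig = new PivotConfig(groupConfig, aggregationConfig); // assumed constructor shape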
@@ -59,12 +59,12 @@ include-tagged::{doc-tests-file}[{api}-pivot-config]
 
 ===== GroupConfig
 The grouping terms. Defines the group by and destination fields
-which are produced by the grouping transform. There are 3 types of
+which are produced by the pivot function. There are 3 types of
 groups
 
 * Terms
 * Histogram
-* Date Historgram
+* Date Histogram
 
 ["source","java",subs="attributes,callouts,macros"]
 --------------------------------------------------
 include-tagged::{doc-tests-file}[{api}-group-config]
 --------------------------------------------------
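A sketch of a terms group; the map-based `GroupConfig` constructor and the `TermsGroupSource` shape are assumptions, and the field names are placeholders:

// Group documents by the value of "user_id", exposed as "reviewer" in the
// destination index. Histogram and date histogram groups follow the same pattern.
GroupConfig groupConfig = new GroupConfig(
        Collections.singletonMap("reviewer", new TermsGroupSource("user_id"))); // assumed shapes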
@@ -76,7 +76,7 @@ include-tagged::{doc-tests-file}[{api}-group-config]
 ===== AggregationConfig
 
 Defines the aggregations for the group fields.
-The aggregation must be one of `avg`, `min`, `max` or `sum`.
+// TODO link to the supported aggregations
 
 ["source","java",subs="attributes,callouts,macros"]
 --------------------------------------------------
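A sketch, assuming `AggregationConfig` accepts a standard `AggregatorFactories.Builder`; `avg` is one of the aggregations this page previously listed:

// Average the "stars" field within each group, written to "avg_rating".
AggregationConfig aggregationConfig = new AggregationConfig(
        AggregatorFactories.builder()
                .addAggregator(AggregationBuilders.avg("avg_rating").field("stars"))); // assumed constructor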
@@ -90,4 +90,4 @@ include::../execution.asciidoc[]
 ==== Response
 
 The returned +{response}+ acknowledges the successful creation of
-the new {dataframe-job} or an error if the configuration is invalid.
+the new {dataframe-transform} or an error if the configuration is invalid.
