This repository has been archived by the owner on Sep 17, 2024. It is now read-only.

feat: implement the installation of the agent #149

Merged
mdelapenya merged 50 commits into elastic:master from mdelapenya:148-fleet-scenarios on Jul 17, 2020

Conversation

mdelapenya
Contributor

@mdelapenya mdelapenya commented Jun 29, 2020

What does this PR do?

We are adding implementation steps for the first scenario, which installs the agent and enrolls it in Kibana.

We are spinning up a Centos:7 box (kept alive by using tail -f /dev/null as the entrypoint), and we are following the Elastic Agent installation process from the guide (https://www.elastic.co/guide/en/ingest-management/7.8/elastic-agent-installation.html):

  1. download the binary for the target platform
  2. untar it
  3. create a symlink to enable "elastic-agent" in the PATH (we did this to simplify invoking the agent from outside the container)

To download the agent, we are inspecting the JSON response from querying Elastic's artifacts API (https://artifacts-api.elastic.co) and extracting the download URL for the current 8.0.0-SNAPSHOT version of the elastic-agent.
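A minimal sketch of that extraction, assuming a simplified response shape (a "packages" object keyed by artifact file name, each entry carrying its download URL); the live artifacts API response is richer, and the sample URL is invented:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sampleResponse mimics (in simplified, assumed form) the artifacts API JSON.
const sampleResponse = `{
  "packages": {
    "elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.tar.gz": {
      "url": "https://snapshots.example.invalid/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.tar.gz"
    }
  }
}`

type artifactsResponse struct {
	Packages map[string]struct {
		URL string `json:"url"`
	} `json:"packages"`
}

// downloadURLFor extracts the download URL for one artifact file name.
func downloadURLFor(body []byte, artifact string) (string, error) {
	var resp artifactsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return "", err
	}
	pkg, ok := resp.Packages[artifact]
	if !ok {
		return "", fmt.Errorf("artifact %s not found in response", artifact)
	}
	return pkg.URL, nil
}

func main() {
	url, err := downloadURLFor([]byte(sampleResponse), "elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.tar.gz")
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}
```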

If the ARTIFACT_HASH environment variable is present, the test suite will use it as part of the download URL instead of inspecting the JSON above, therefore downloading a specific, existing version of the agent. This ability is needed to bypass potential issues when working with the latest snapshot.

We enroll the agent with a new token (we added support for creating a token when deploying a new agent, so it's possible to revoke a token without affecting other test scenarios), and run the agent, which is shown in Kibana as online. After a few seconds, around 20, data streams will be present in Kibana.
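The enroll-and-run sequence could look like the following sketch. The CLI shape (elastic-agent enroll <kibana_url> <token> -f) follows the 7.x-era docs and should be treated as an assumption, since the flags changed across versions:

```go
package main

import "fmt"

// enrollAndRunCommands sketches the enroll-and-run sequence executed in the
// container. The CLI shape is an assumption based on the 7.x-era docs.
func enrollAndRunCommands(kibanaURL, enrollmentToken string) []string {
	return []string{
		// -f skips the interactive confirmation; the token is created per
		// deployed agent so it can be revoked independently of other scenarios.
		fmt.Sprintf("elastic-agent enroll %s %s -f", kibanaURL, enrollmentToken),
		"elastic-agent run",
	}
}

func main() {
	for _, cmd := range enrollAndRunCommands("http://kibana:5601", "TOKEN") {
		fmt.Println(cmd)
	}
}
```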

As a side note, we are reusing the "waitForProcess" method to check that the installation commands finished, verifying that they are no longer present when executing "pgrep -n -l PROCESS_NAME" in the agent container.

For the stop scenario, we are killing the process with pkill, which comes bundled in the Centos box.
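Both checks reduce to inspecting command output, which can be sketched (and tested) without a container; the helper names here are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// processFinished mirrors the waitForProcess check: after running
// `pgrep -n -l NAME` in the agent container, empty output means the process
// is gone, i.e. the installation command has finished.
func processFinished(pgrepOutput string) bool {
	return strings.TrimSpace(pgrepOutput) == ""
}

// stopCommand is the stop-scenario command; pkill ships with the Centos box.
func stopCommand(processName string) string {
	return fmt.Sprintf("pkill %s", processName)
}

func main() {
	fmt.Println(processFinished("1234 elastic-agent")) // still running
	fmt.Println(processFinished(""))                   // finished
	fmt.Println(stopCommand("elastic-agent"))
}
```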

Why is it important?

Making progress with Fleet mode

Related issues

We want to be able to check the state of a process (started, stopped), but we also want to change its state (start/stop/restart).
@mdelapenya mdelapenya self-assigned this Jun 29, 2020
@mdelapenya mdelapenya requested review from a team, EricDavisX and michalpristas June 29, 2020 21:36
@apmmachine
Contributor

apmmachine commented Jun 29, 2020

💔 Tests Failed



Build stats

  • Build Cause: [Pull request #149 updated]

  • Start Time: 2020-07-16T15:23:20.485+0000

  • Duration: 23 min 0 sec

Test stats 🧪

Test Results
Failed 3
Passed 53
Skipped 10
Total 66

Test errors


  • Name: Initializing / Tests / Sanity checks / golangcilint – pre_commit.lint

    • Age: 1
    • Duration: 0
    • Error Details: error
  • Name: Initializing / End-To-End Tests / ingest-manager_fleet_mode / Un-enrolling an agent – Fleet Mode Agent

    • Age: 7
    • Duration: 26.798424
    • Error Details: Step the agent is not listed as online in Fleet: The Agent is still online
  • Name: Initializing / End-To-End Tests / ingest-manager_fleet_mode / Re-enrolling an agent – Fleet Mode Agent

    • Age: 7
    • Duration: 26.395763
    • Error Details: Step the agent is listed in Fleet as online: There are 4 online agents. We expected to have exactly one

Steps errors


  • Name: Run functional tests for ingest-manager:fleet_mode

    • Description:

    • Duration: 9 min 52 sec

    • Start Time: 2020-07-16T15:34:01.583+0000

    • log

  • Name: Error signal

    • Description:

    • Duration: 0 min 0 sec

    • Start Time: 2020-07-16T15:42:53.544+0000

    • log

  • Name: General Build Step

    • Description: [2020-07-16T15:42:54.213Z] Archiving artifacts
      hudson.AbortException: script returned exit code 1

    • Duration: 0 min 0 sec

    • Start Time: 2020-07-16T15:42:54.209+0000

    • log

Log output


[2020-07-16T15:42:53.521Z] + exit_status=1
[2020-07-16T15:42:53.521Z] + sed -e 's/^[ \t]*//; s#>.*failed$#>#g' outputs/TEST-ingest-manager-fleet_mode
[2020-07-16T15:42:53.521Z] + grep -E '^<.*>$'
[2020-07-16T15:42:53.521Z] + exit 1
[2020-07-16T15:42:53.557Z] Recording test results
[2020-07-16T15:42:54.213Z] Archiving artifacts
[2020-07-16T15:42:54.334Z] Failed in branch ingest-manager_fleet_mode
[2020-07-16T15:42:57.009Z] time="2020-07-16T15:42:56Z" level=warning msg="Waiting for more hits in the index" currentHits=3 desiredHits=5 elapsedTime=12.036690049s index=metricbeat-7.8.0-mysql-mysql-8.0.13-5c6wl1rw retry=6
[2020-07-16T15:43:00.305Z] time="2020-07-16T15:42:59Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=15.811153473s index=metricbeat-7.8.0-mysql-mysql-8.0.13-5c6wl1rw retry=7
[2020-07-16T15:43:04.509Z] time="2020-07-16T15:43:03Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=19.728680395s index=metricbeat-7.8.0-mysql-mysql-8.0.13-5c6wl1rw retry=8
[2020-07-16T15:43:11.091Z] time="2020-07-16T15:43:10Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=26.179199838s retries=9
[2020-07-16T15:43:11.091Z] time="2020-07-16T15:43:10Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=6.982154ms retries=1
[2020-07-16T15:43:11.091Z] Stopping metricbeat_metricbeat_1 ... 
[2020-07-16T15:43:11.351Z] 
Stopping metricbeat_metricbeat_1 ... done
Removing metricbeat_metricbeat_1 ... 
[2020-07-16T15:43:11.351Z] 
Removing metricbeat_metricbeat_1 ... done
Going to remove metricbeat_metricbeat_1
[2020-07-16T15:43:11.921Z] Stopping metricbeat_mysql_1 ... 
[2020-07-16T15:43:14.459Z] 
Stopping metricbeat_mysql_1 ... done
Removing metricbeat_mysql_1 ... 
[2020-07-16T15:43:14.460Z] 
Removing metricbeat_mysql_1 ... done
Going to remove metricbeat_mysql_1
[2020-07-16T15:43:15.032Z] Pulling mysql (docker.elastic.co/integrations-ci/beats-mysql:percona-5.7.24-1)...
[2020-07-16T15:43:15.603Z] percona-5.7.24-1: Pulling from integrations-ci/beats-mysql
[2020-07-16T15:43:22.227Z] metricbeat_elasticsearch_1 is up-to-date
[2020-07-16T15:43:22.227Z] Creating metricbeat_mysql_1 ... 
[2020-07-16T15:43:46.089Z] 
Creating metricbeat_mysql_1 ... done
Found orphan containers (metricbeat_mysql_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[2020-07-16T15:43:46.089Z] metricbeat_elasticsearch_1 is up-to-date
[2020-07-16T15:43:46.089Z] Creating metricbeat_metricbeat_1 ... 
[2020-07-16T15:43:46.089Z] 
Creating metricbeat_metricbeat_1 ... done
time="2020-07-16T15:43:45Z" level=info msg="Metricbeat is running configured for the service" metricbeatVersion=7.8.0 service=mysql serviceVersion=5.7.24 variant=Percona
[2020-07-16T15:44:08.037Z] time="2020-07-16T15:44:05Z" level=warning msg="Waiting for more hits in the index" currentHits=1 desiredHits=5 elapsedTime=7.894009ms index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=1
[2020-07-16T15:44:08.037Z] time="2020-07-16T15:44:06Z" level=warning msg="Waiting for more hits in the index" currentHits=1 desiredHits=5 elapsedTime=444.348411ms index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=2
[2020-07-16T15:44:08.037Z] time="2020-07-16T15:44:07Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=1.831476521s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=3
[2020-07-16T15:44:09.419Z] time="2020-07-16T15:44:09Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=3.431584851s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=4
[2020-07-16T15:44:15.995Z] time="2020-07-16T15:44:14Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=9.014706811s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=5
[2020-07-16T15:44:17.907Z] time="2020-07-16T15:44:17Z" level=warning msg="Waiting for more hits in the index" currentHits=3 desiredHits=5 elapsedTime=12.008932999s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=6
[2020-07-16T15:44:26.033Z] time="2020-07-16T15:44:25Z" level=warning msg="Waiting for more hits in the index" currentHits=3 desiredHits=5 elapsedTime=19.40028786s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=7
[2020-07-16T15:44:28.572Z] time="2020-07-16T15:44:28Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=22.278625998s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=8
[2020-07-16T15:44:31.864Z] time="2020-07-16T15:44:31Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=25.896363212s index=metricbeat-7.8.0-mysql-percona-5.7.24-rikngnav retry=9
[2020-07-16T15:44:38.439Z] time="2020-07-16T15:44:37Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=31.810619654s retries=10
[2020-07-16T15:44:38.439Z] time="2020-07-16T15:44:37Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=6.221969ms retries=1
[2020-07-16T15:44:38.439Z] Stopping metricbeat_metricbeat_1 ... 
[2020-07-16T15:44:38.698Z] 
Stopping metricbeat_metricbeat_1 ... done
Removing metricbeat_metricbeat_1 ... 
[2020-07-16T15:44:38.698Z] 
Removing metricbeat_metricbeat_1 ... done
Going to remove metricbeat_metricbeat_1
[2020-07-16T15:44:39.267Z] Stopping metricbeat_mysql_1 ... 
[2020-07-16T15:44:41.804Z] 
Stopping metricbeat_mysql_1 ... done
Removing metricbeat_mysql_1 ... 
[2020-07-16T15:44:41.804Z] 
Removing metricbeat_mysql_1 ... done
Going to remove metricbeat_mysql_1
[2020-07-16T15:44:42.375Z] Pulling mysql (docker.elastic.co/integrations-ci/beats-mysql:percona-8.0.13-4-1)...
[2020-07-16T15:44:42.634Z] percona-8.0.13-4-1: Pulling from integrations-ci/beats-mysql
[2020-07-16T15:44:50.771Z] metricbeat_elasticsearch_1 is up-to-date
[2020-07-16T15:44:50.771Z] Creating metricbeat_mysql_1 ... 
[2020-07-16T15:45:14.629Z] 
Creating metricbeat_mysql_1 ... done
Found orphan containers (metricbeat_mysql_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[2020-07-16T15:45:14.629Z] metricbeat_elasticsearch_1 is up-to-date
[2020-07-16T15:45:14.629Z] Creating metricbeat_metricbeat_1 ... 
[2020-07-16T15:45:14.629Z] 
Creating metricbeat_metricbeat_1 ... done
time="2020-07-16T15:45:13Z" level=info msg="Metricbeat is running configured for the service" metricbeatVersion=7.8.0 service=mysql serviceVersion=8.0.13-4 variant=Percona
[2020-07-16T15:45:36.578Z] time="2020-07-16T15:45:33Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=8.759104ms index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=1
[2020-07-16T15:45:36.578Z] time="2020-07-16T15:45:34Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=385.22892ms index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=2
[2020-07-16T15:45:36.578Z] time="2020-07-16T15:45:35Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=1.203249215s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=3
[2020-07-16T15:45:38.490Z] time="2020-07-16T15:45:37Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=4.075750474s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=4
[2020-07-16T15:45:43.771Z] time="2020-07-16T15:45:42Z" level=warning msg="Waiting for more hits in the index" currentHits=2 desiredHits=5 elapsedTime=9.048974399s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=5
[2020-07-16T15:45:50.345Z] time="2020-07-16T15:45:49Z" level=warning msg="Waiting for more hits in the index" currentHits=3 desiredHits=5 elapsedTime=15.56206745s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=6
[2020-07-16T15:45:55.625Z] time="2020-07-16T15:45:55Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=21.719999871s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=7
[2020-07-16T15:45:59.823Z] time="2020-07-16T15:45:59Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=25.141997514s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=8
[2020-07-16T15:46:04.026Z] time="2020-07-16T15:46:03Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=29.790114012s index=metricbeat-7.8.0-mysql-percona-8.0.13-4-kpv2dffc retry=9
[2020-07-16T15:46:12.164Z] time="2020-07-16T15:46:10Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=36.782125341s retries=10
[2020-07-16T15:46:12.164Z] time="2020-07-16T15:46:10Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=5.985452ms retries=1
[2020-07-16T15:46:12.164Z] Stopping metricbeat_metricbeat_1 ... 
[2020-07-16T15:46:12.164Z] 
Stopping metricbeat_metricbeat_1 ... done
Removing metricbeat_metricbeat_1 ... 
[2020-07-16T15:46:12.165Z] 
Removing metricbeat_metricbeat_1 ... done
Going to remove metricbeat_metricbeat_1
[2020-07-16T15:46:13.550Z] Stopping metricbeat_mysql_1 ... 
[2020-07-16T15:46:16.091Z] 
Stopping metricbeat_mysql_1 ... done
Removing metricbeat_mysql_1 ... 
[2020-07-16T15:46:16.091Z] 
Removing metricbeat_mysql_1 ... done
Going to remove metricbeat_mysql_1
[2020-07-16T15:46:16.662Z] Stopping metricbeat_elasticsearch_1 ... 
[2020-07-16T15:46:17.602Z] 
Stopping metricbeat_elasticsearch_1 ... done
Removing metricbeat_elasticsearch_1 ... 
[2020-07-16T15:46:17.602Z] 
Removing metricbeat_elasticsearch_1 ... done
Removing network metricbeat_default
[2020-07-16T15:46:17.602Z] <?xml version="1.0" encoding="UTF-8"?>
[2020-07-16T15:46:17.602Z] <testsuites name="main" tests="7" skipped="0" failures="0" errors="0" time="642.382188211">
[2020-07-16T15:46:17.602Z]   <testsuite name="As a Metricbeat developer I want to check that default configuration works as expected" tests="0" skipped="0" failures="0" errors="0" time="0"></testsuite>
[2020-07-16T15:46:17.602Z]   <testsuite name="As a Metricbeat developer I want to check that the Apache module works as expected" tests="0" skipped="0" failures="0" errors="0" time="0"></testsuite>
[2020-07-16T15:46:17.602Z]   <testsuite name="As a Metricbeat developer I want to check that the MySQL module works as expected" tests="7" skipped="0" failures="0" errors="0" time="597.308109752">
[2020-07-16T15:46:17.602Z]     <testcase name="Check MariaDB-10.2.23 is sending metrics to Elasticsearch without errors" status="passed" time="87.809865103"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check MariaDB-10.3.14 is sending metrics to Elasticsearch without errors" status="passed" time="75.517390234"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check MariaDB-10.4.4 is sending metrics to Elasticsearch without errors" status="passed" time="75.310526423"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check MySQL-5.7.12 is sending metrics to Elasticsearch without errors" status="passed" time="86.904295018"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check MySQL-8.0.13 is sending metrics to Elasticsearch without errors" status="passed" time="78.583657936"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check Percona-5.7.24 is sending metrics to Elasticsearch without errors" status="passed" time="83.259769338"></testcase>
[2020-07-16T15:46:17.602Z]     <testcase name="Check Percona-8.0.13-4 is sending metrics to Elasticsearch without errors" status="passed" time="89.074889214"></testcase>
[2020-07-16T15:46:17.603Z]   </testsuite>
[2020-07-16T15:46:17.603Z]   <testsuite name="As a Metricbeat developer I want to check that the Redis module works as expected" tests="0" skipped="0" failures="0" errors="0" time="0"></testsuite>
[2020-07-16T15:46:17.603Z]   <testsuite name="As a Metricbeat developer I want to check that the vSphere module works as expected" tests="0" skipped="0" failures="0" errors="0" time="0"></testsuite>
[2020-07-16T15:46:17.603Z] </testsuites>+ sed -e 's/^[ \t]*//; s#>.*failed$#>#g' outputs/TEST-metricbeat-mysql
[2020-07-16T15:46:17.603Z] + grep -E '^<.*>$'
[2020-07-16T15:46:17.603Z] + exit 0
[2020-07-16T15:46:17.637Z] Recording test results
[2020-07-16T15:46:18.059Z] Archiving artifacts
[2020-07-16T15:46:19.158Z] Stage "Release" skipped due to earlier failure(s)
[2020-07-16T15:46:19.772Z] Running on worker-854309 in /var/lib/jenkins/workspace/stack_e2e-testing-mbp_PR-149
[2020-07-16T15:46:19.805Z] [INFO] getVaultSecret: Getting secrets
[2020-07-16T15:46:19.868Z] Masking supported pattern matches of $VAULT_ADDR or $VAULT_ROLE_ID or $VAULT_SECRET_ID
[2020-07-16T15:46:21.703Z] + chmod 755 generate-build-data.sh
[2020-07-16T15:46:21.704Z] + ./generate-build-data.sh https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-149/ https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-149/runs/32 FAILURE 1379813
[2020-07-16T15:46:21.704Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-149/runs/32/steps/?limit=10000 -o steps-info.json
[2020-07-16T15:46:25.301Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-149/runs/32/tests/?status=FAILED -o tests-errors.json
[2020-07-16T15:46:26.002Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-149/runs/32/log/ -o pipeline-log.txt

Contributor

@cachedout cachedout left a comment


LGTM

@mdelapenya mdelapenya marked this pull request as ready for review July 6, 2020 07:20
ports:
- "5601:5601"
volumes:
- ${kibanaConfigPath}:/usr/share/kibana/config/kibana.yml
package-registry:
-   image: docker.elastic.co/package-registry/package-registry:master
+   image: docker.elastic.co/package-registry/package-registry:41c150c8020efc53ab16e3bba774c62a419b51ea
Contributor


can we get a comment as to how folks would know how and when to update this... I think some details will be here: elastic/package-storage#86

Contributor Author


We replaced the master branch because there was a combination of errors:

  • Kibana was unable to start
  • the agent enrollment process did not work because of mismatched agent versions compared in Kibana

I noticed the team changed the version here: elastic/integrations@25a4bd4, and I shadowed it.

"url": r.URL,
"error": err,
"method": r.method,
"escapedURL": escapedURL,
Contributor


I recall discussing this, glad you got it sorted.

@@ -82,7 +83,7 @@ func (sats *StandAloneTestSuite) aStandaloneAgentIsDeployed() error {

func (sats *StandAloneTestSuite) thereIsNewDataInTheIndexFromAgent() error {
maxTimeout := time.Duration(queryRetryTimeout) * time.Minute
-	minimumHitsCount := 100
+	minimumHitsCount := 50
Contributor


i agree - I might even lower it more... but this is helpful and still keeps high value

Contributor Author


It takes 10 seconds to reach 50 hits

@@ -230,7 +237,7 @@ func searchAgentData(hostname string, startDate time.Time, minimumHitsCount int,
},
}

-	indexName := "logs-agent-default"
+	indexName := ".ds-logs-elastic.agent-default-000001"
Contributor


we're not supposed to have to use the .ds in order to find things in a search. is this used only to assert that a new index is being accessed? I don't know the Data Streams feature well to be 100% sure of usage just confirming. How is it used? We can ask @michalpristas to look at the usage and help confirm...

Contributor Author


I'd like to know if there is a way to define an alias. For metricbeat we do this:

"-E", "setup.ilm.rollover_alias=${indexName}",
We define an index for each metricbeat that is run. Maybe it's not doable in the same way, but I'd love to define an index to be used during the entire test suite.

Contributor

@EricDavisX EricDavisX left a comment


i love this - lets merge it as soon as we can. we could wait for some product fixes... or not. maybe just push it in so its easier to prep coming down-stream prs. :)

@mdelapenya
Contributor Author

The two existing failures are directly related to elastic/beats#20006

I'm gonna merge this to have the Fleet scenarios on CI. Thanks @EricDavisX for your review!

@mdelapenya mdelapenya merged commit 99e9740 into elastic:master Jul 17, 2020
@mdelapenya mdelapenya deleted the 148-fleet-scenarios branch July 17, 2020 10:52
Development

Successfully merging this pull request may close these issues.

Update fleet setup on Kibana
4 participants