
Bump apollo-server-core package version #71393

Status: Closed · wants to merge 1 commit

Conversation

@jportner (Contributor) commented Jul 12, 2020

Overview:

  • apollo-server-core 1.3.6 -> 2.19.0
  • apollo-server-hapi (removed)

Note: apollo-server-core was previously pulled in as a transitive dependency of apollo-server-hapi, but apollo-server-hapi is no longer used directly, so that package has been removed entirely. We only need apollo-server-core.
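In manifest terms, the change described above amounts to roughly the following (illustrative sketch only; the actual package.json location and version ranges in the PR may differ):

```diff
 "dependencies": {
-  "apollo-server-core": "^1.3.6",
-  "apollo-server-hapi": "^1.3.6",
+  "apollo-server-core": "^2.19.0",
   ...
 }
```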

@jportner added labels on Jul 12, 2020: chore, release_note:skip (skip the PR/issue when compiling release notes), v6.8.12, v7.9.0, v8.0.0
@jportner (Contributor, Author)

This PR as-is fails the type checker. Notes:

  • GraphiQL is no longer bundled/exposed (see the "Migrating to v2.0" guide). Our implementation uses it to generate an HTML body string in non-production mode, and it is not immediately apparent what the best path forward is here.
  • The runHttpQuery method's options argument now requires a schemaHash (see commit). Since our implementation obtains its schema from a different package (graphql-tools), this may pose a problem.
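Since the schema here comes from graphql-tools rather than apollo-server's own setup path, one conceivable workaround is to compute a deterministic hash over the printed SDL ourselves. The sketch below is an assumption, not apollo-server-core's own hashing: `hashPrintedSchema` is a hypothetical helper, and whether runHttpQuery accepts an externally computed digest would need to be verified.

```typescript
// Hypothetical sketch: derive a stable hash for a schema built with
// graphql-tools, for use where apollo-server-core v2 expects a schemaHash.
// This hashes the printed SDL text with Node's crypto module; it is NOT
// apollo-server-core's own schemaHash implementation.
import { createHash } from 'crypto';

export function hashPrintedSchema(printedSdl: string): string {
  // sha256 over the SDL text yields a deterministic 64-character hex digest
  return createHash('sha256').update(printedSdl).digest('hex');
}

// Usage (assumes `schema` came from graphql-tools' makeExecutableSchema):
//   const sdl = printSchema(schema); // printSchema is from the 'graphql' package
//   const schemaHash = hashPrintedSchema(sdl);
```

Whether this digest is interchangeable with the hash apollo-server-core computes internally is exactly the open question raised above.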

@jportner (Contributor, Author)

@jasonrhodes @rylnd both of your names come up in the git blame when looking at the apollo-server-core usage. Could one of you take a look at the CI failures and #71393 (comment) and advise what needs to be done to upgrade?

Side note: the functional test failure appears to be unrelated/flaky.

@jasonrhodes (Member)

@jportner I logged this and scheduled it for 7.10: #73526

@LeeDr commented Aug 6, 2020

@jportner do we need this for v6.8.12? If so, it needs to land by EOD 2020-08-11.

@jportner (Contributor, Author) commented Aug 6, 2020

@LeeDr no, thanks for pointing it out, I’ll update the tags

@watson (Contributor) commented Nov 20, 2020

@elasticmachine merge upstream

@kibanamachine (Contributor)

merge conflict between base and head

@watson (Contributor) commented Nov 20, 2020

Hmm that merge didn't go well 😅

@watson force-pushed the bump-apollo-server-core branch from 10cb67c to f469057 on November 20, 2020 at 13:10
@watson (Contributor) commented Nov 20, 2020

Force push FTW 💪

@kibanamachine (Contributor) commented Nov 20, 2020

💔 Build Failed

Failed CI Steps


Test Failures

Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/infra/feature_controls/infrastructure_security·ts.InfraOps app feature controls infrastructure security global infrastructure all privileges infrastructure landing page without data shows 'Change source configuration' button

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:26:43]         └-: InfraOps app
[00:26:43]           └-> "before all" hook
[00:26:43]           └-: feature controls
[00:26:43]             └-> "before all" hook
[00:26:43]             └-: infrastructure security
[00:26:43]               └-> "before all" hook
[00:26:43]               └-: global infrastructure all privileges
[00:26:43]                 └-> "before all" hook
[00:26:43]                 └-> "before all" hook
[00:26:43]                   │ debg creating role global_infrastructure_all_role
[00:26:43]                   │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] added role [global_infrastructure_all_role]
[00:26:43]                   │ debg creating user global_infrastructure_all_user
[00:26:43]                   │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] added user [global_infrastructure_all_user]
[00:26:43]                   │ debg created user global_infrastructure_all_user
[00:26:43]                   │ debg SecurityPage.forceLogout
[00:26:43]                   │ debg Find.existsByDisplayedByCssSelector('.login-form') with timeout=100
[00:26:43]                   │ debg --- retry.tryForTime error: .login-form is not displayed
[00:26:44]                   │ debg Redirecting to /logout to force the logout
[00:26:44]                   │ debg Waiting on the login form to appear
[00:26:44]                   │ debg Waiting for Login Page to appear.
[00:26:44]                   │ debg Waiting up to 100000ms for login page...
[00:26:44]                   │ debg browser[INFO] http://localhost:61131/logout?_t=1605881243783 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:26:44]                   │
[00:26:44]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:26:44]                   │ debg Find.existsByDisplayedByCssSelector('.login-form') with timeout=2500
[00:26:46]                   │ debg browser[INFO] http://localhost:61131/login?_t=1605881243783 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:26:46]                   │
[00:26:46]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:26:46]                   │ERROR browser[SEVERE] http://localhost:61131/internal/spaces/_active_space - Failed to load resource: the server responded with a status of 401 (Unauthorized)
[00:26:46]                   │ debg browser[INFO] http://localhost:61131/38241/bundles/core/core.entry.js 12:193817 "Detected an unhandled Promise rejection.
[00:26:46]                   │      Error: Unauthorized"
[00:26:46]                   │ERROR browser[SEVERE] http://localhost:61131/38241/bundles/core/core.entry.js 5:3002 
[00:26:46]                   │ debg navigating to login url: http://localhost:61131/login
[00:26:46]                   │ debg navigate to: http://localhost:61131/login
[00:26:46]                   │ debg ... sleep(700) start
[00:26:46]                   │ debg browser[INFO] http://localhost:61131/login?_t=1605881246627 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:26:46]                   │
[00:26:46]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:26:47]                   │ debg ... sleep(700) end
[00:26:47]                   │ debg returned from get, calling refresh
[00:26:47]                   │ERROR browser[SEVERE] http://localhost:61131/internal/spaces/_active_space - Failed to load resource: the server responded with a status of 401 (Unauthorized)
[00:26:47]                   │ debg browser[INFO] http://localhost:61131/38241/bundles/core/core.entry.js 12:193817 "Detected an unhandled Promise rejection.
[00:26:47]                   │      Error: Unauthorized"
[00:26:47]                   │ERROR browser[SEVERE] http://localhost:61131/38241/bundles/core/core.entry.js 5:3002 
[00:26:47]                   │ERROR browser[SEVERE] http://localhost:61131/38241/bundles/core/core.entry.js 12:192870 TypeError: Failed to fetch
[00:26:47]                   │          at _callee3$ (http://localhost:61131/38241/bundles/core/core.entry.js:6:43940)
[00:26:47]                   │          at l (http://localhost:61131/38241/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751406)
[00:26:47]                   │          at Generator._invoke (http://localhost:61131/38241/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751159)
[00:26:47]                   │          at Generator.forEach.e.<computed> [as throw] (http://localhost:61131/38241/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751763)
[00:26:47]                   │          at fetch_asyncGeneratorStep (http://localhost:61131/38241/bundles/core/core.entry.js:6:38998)
[00:26:47]                   │          at _throw (http://localhost:61131/38241/bundles/core/core.entry.js:6:39406)
[00:26:47]                   │ debg browser[INFO] http://localhost:61131/login?_t=1605881246627 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:26:47]                   │
[00:26:47]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:26:47]                   │ debg currentUrl = http://localhost:61131/login
[00:26:47]                   │          appUrl = http://localhost:61131/login
[00:26:47]                   │ debg TestSubjects.find(kibanaChrome)
[00:26:47]                   │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:26:48]                   │ERROR browser[SEVERE] http://localhost:61131/internal/spaces/_active_space - Failed to load resource: the server responded with a status of 401 (Unauthorized)
[00:26:48]                   │ debg browser[INFO] http://localhost:61131/38241/bundles/core/core.entry.js 12:193817 "Detected an unhandled Promise rejection.
[00:26:48]                   │      Error: Unauthorized"
[00:26:48]                   │ERROR browser[SEVERE] http://localhost:61131/38241/bundles/core/core.entry.js 5:3002 
[00:26:48]                   │ debg ... sleep(501) start
[00:26:49]                   │ debg ... sleep(501) end
[00:26:49]                   │ debg in navigateTo url = http://localhost:61131/login
[00:26:49]                   │ debg TestSubjects.exists(statusPageContainer)
[00:26:49]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="statusPageContainer"]') with timeout=2500
[00:26:51]                   │ debg --- retry.tryForTime error: [data-test-subj="statusPageContainer"] is not displayed
[00:26:52]                   │ debg Waiting for Login Form to appear.
[00:26:52]                   │ debg Waiting up to 100000ms for login form...
[00:26:52]                   │ debg TestSubjects.exists(loginForm)
[00:26:52]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="loginForm"]') with timeout=2500
[00:26:52]                   │ debg TestSubjects.setValue(loginUsername, global_infrastructure_all_user)
[00:26:52]                   │ debg TestSubjects.click(loginUsername)
[00:26:52]                   │ debg Find.clickByCssSelector('[data-test-subj="loginUsername"]') with timeout=10000
[00:26:52]                   │ debg Find.findByCssSelector('[data-test-subj="loginUsername"]') with timeout=10000
[00:26:52]                   │ debg TestSubjects.setValue(loginPassword, global_infrastructure_all_user-password)
[00:26:52]                   │ debg TestSubjects.click(loginPassword)
[00:26:52]                   │ debg Find.clickByCssSelector('[data-test-subj="loginPassword"]') with timeout=10000
[00:26:52]                   │ debg Find.findByCssSelector('[data-test-subj="loginPassword"]') with timeout=10000
[00:26:52]                   │ debg TestSubjects.click(loginSubmit)
[00:26:52]                   │ debg Find.clickByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:26:52]                   │ debg Find.findByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:26:52]                   │ debg Waiting for login result, expected: undefined.
[00:26:52]                   │ debg Waiting up to 20000ms for logout button visible...
[00:26:52]                   │ debg TestSubjects.exists(userMenuButton)
[00:26:52]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenuButton"]') with timeout=2500
[00:26:52]                   │ proc [kibana]   log   [14:07:32.606] [info][plugins][routes][security] Logging in with provider "basic" (basic)
[00:26:54]                   │ debg browser[INFO] http://localhost:61131/app/home 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:26:54]                   │
[00:26:54]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:26:55]                   │ debg TestSubjects.exists(userMenu)
[00:26:55]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"]') with timeout=2500
[00:26:57]                   │ debg --- retry.tryForTime error: [data-test-subj="userMenu"] is not displayed
[00:26:58]                   │ debg TestSubjects.click(userMenuButton)
[00:26:58]                   │ debg Find.clickByCssSelector('[data-test-subj="userMenuButton"]') with timeout=10000
[00:26:58]                   │ debg Find.findByCssSelector('[data-test-subj="userMenuButton"]') with timeout=10000
[00:26:58]                   │ debg TestSubjects.exists(userMenu)
[00:26:58]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"]') with timeout=120000
[00:26:58]                   │ debg TestSubjects.exists(userMenu > logoutLink)
[00:26:58]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"] [data-test-subj="logoutLink"]') with timeout=2500
[00:26:58]                 └-> shows metrics navlink
[00:26:58]                   └-> "before each" hook: global before each
[00:26:58]                   │ debg isGlobalLoadingIndicatorVisible
[00:26:58]                   │ debg TestSubjects.exists(globalLoadingIndicator)
[00:26:58]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="globalLoadingIndicator"]') with timeout=1500
[00:26:59]                   │ debg --- retry.tryForTime error: [data-test-subj="globalLoadingIndicator"] is not displayed
[00:27:00]                   │ debg TestSubjects.exists(globalLoadingIndicator-hidden)
[00:27:00]                   │ debg Find.existsByCssSelector('[data-test-subj="globalLoadingIndicator-hidden"]') with timeout=100000
[00:27:00]                   │ debg TestSubjects.exists(collapsibleNav)
[00:27:00]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="collapsibleNav"]') with timeout=2500
[00:27:02]                   │ debg --- retry.tryForTime error: [data-test-subj="collapsibleNav"] is not displayed
[00:27:03]                   │ debg TestSubjects.click(toggleNavButton)
[00:27:03]                   │ debg Find.clickByCssSelector('[data-test-subj="toggleNavButton"]') with timeout=10000
[00:27:03]                   │ debg Find.findByCssSelector('[data-test-subj="toggleNavButton"]') with timeout=10000
[00:27:03]                   │ debg TestSubjects.find(collapsibleNav)
[00:27:03]                   │ debg Find.findByCssSelector('[data-test-subj="collapsibleNav"]') with timeout=10000
[00:27:03]                   │ debg Find.existsByCssSelector('[data-test-subj=collapsibleNav] > button') with timeout=2500
[00:27:03]                   │ debg Find.findByCssSelector('[data-test-subj=collapsibleNav] > button') with timeout=10000
[00:27:03]                   │ debg Find.clickByCssSelector('[data-test-subj=collapsibleNav] > button') with timeout=10000
[00:27:03]                   │ debg Find.findByCssSelector('[data-test-subj=collapsibleNav] > button') with timeout=10000
[00:27:03]                   └- ✓ pass  (5.4s) "InfraOps app feature controls infrastructure security global infrastructure all privileges shows metrics navlink"
[00:27:03]                 └-> metrics page is visible
[00:27:03]                   └-> "before each" hook: global before each
[00:27:03]                   │ debg navigateToActualUrl http://localhost:61131/app/metrics/detail/host/demo-stack-redis-01
[00:27:03]                   │ debg browser[INFO] http://localhost:61131/app/metrics/detail/host/demo-stack-redis-01?_t=1605881263399 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:27:03]                   │
[00:27:03]                   │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:27:03]                   │ debg TestSubjects.exists(~infraMetricsPage)
[00:27:03]                   │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="infraMetricsPage"]') with timeout=120000
[00:27:05]                   └- ✓ pass  (1.5s) "InfraOps app feature controls infrastructure security global infrastructure all privileges metrics page is visible"
[00:27:05]                 └-: infrastructure landing page without data
[00:27:05]                   └-> "before all" hook
[00:27:05]                   └-> shows 'Change source configuration' button
[00:27:05]                     └-> "before each" hook: global before each
[00:27:05]                     │ debg navigateToActualUrl http://localhost:61131/app/metrics
[00:27:05]                     │ proc [kibana]  error  [14:07:44.882]  Error: Internal Server Error
[00:27:05]                     │ proc [kibana]     at HapiResponseAdapter.toError (/dev/shm/workspace/kibana-build-xpack-13/src/core/server/http/router/response_adapter.js:132:19)
[00:27:05]                     │ proc [kibana]     at HapiResponseAdapter.toHapiResponse (/dev/shm/workspace/kibana-build-xpack-13/src/core/server/http/router/response_adapter.js:86:19)
[00:27:05]                     │ proc [kibana]     at HapiResponseAdapter.handle (/dev/shm/workspace/kibana-build-xpack-13/src/core/server/http/router/response_adapter.js:81:17)
[00:27:05]                     │ proc [kibana]     at Router.handle (/dev/shm/workspace/kibana-build-xpack-13/src/core/server/http/router/router.js:164:34)
[00:27:05]                     │ proc [kibana]     at runMicrotasks (<anonymous>)
[00:27:05]                     │ proc [kibana]     at processTicksAndRejections (internal/process/task_queues.js:97:5)
[00:27:05]                     │ proc [kibana]     at async handler (/dev/shm/workspace/kibana-build-xpack-13/src/core/server/http/router/router.js:124:50)
[00:27:05]                     │ proc [kibana]     at async module.exports.internals.Manager.execute (/dev/shm/workspace/kibana-build-xpack-13/node_modules/@hapi/hapi/lib/toolkit.js:45:28)
[00:27:05]                     │ proc [kibana]     at async Object.internals.handler (/dev/shm/workspace/kibana-build-xpack-13/node_modules/@hapi/hapi/lib/handler.js:46:20)
[00:27:05]                     │ proc [kibana]     at async exports.execute (/dev/shm/workspace/kibana-build-xpack-13/node_modules/@hapi/hapi/lib/handler.js:31:20)
[00:27:05]                     │ proc [kibana]     at async Request._lifecycle (/dev/shm/workspace/kibana-build-xpack-13/node_modules/@hapi/hapi/lib/request.js:312:32)
[00:27:05]                     │ proc [kibana]     at async Request._execute (/dev/shm/workspace/kibana-build-xpack-13/node_modules/@hapi/hapi/lib/request.js:221:9)
[00:27:05]                     │ERROR browser[SEVERE] http://localhost:61131/api/metrics/node_details - Failed to load resource: the server responded with a status of 500 (Internal Server Error)
[00:27:05]                     │ debg browser[INFO] http://localhost:61131/app/metrics?_t=1605881264925 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:27:05]                     │
[00:27:05]                     │ debg browser[INFO] http://localhost:61131/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:27:05]                     │ debg TestSubjects.exists(~infrastructureViewSetupInstructionsButton)
[00:27:05]                     │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="infrastructureViewSetupInstructionsButton"]') with timeout=120000
[00:27:08]                     │ debg --- retry.tryForTime error: [data-test-subj~="infrastructureViewSetupInstructionsButton"] is not displayed
[00:27:11]                     │ debg --- retry.tryForTime failed again with the same message...
[00:27:14 … 00:29:06]                     │ debg --- retry.tryForTime failed again with the same message... (repeated every ~3s until the 120000ms timeout)
[00:29:06]                     │ info Taking screenshot "/dev/shm/workspace/parallel/13/kibana/x-pack/test/functional/screenshots/failure/InfraOps app feature controls infrastructure security global infrastructure all privileges infrastructure landing page without data shows _Change source configuration_ button.png"
[00:29:07]                     │ info Current URL is: http://localhost:61131/app/metrics/inventory?waffleFilter=(expression%3A''%2Ckind%3Akuery)&waffleTime=(currentTime%3A1605881265873%2CisAutoReloading%3A!f)&waffleOptions=(accountId%3A''%2CautoBounds%3A!t%2CboundsOverride%3A(max%3A1%2Cmin%3A0)%2CcustomMetrics%3A!()%2CcustomOptions%3A!()%2CgroupBy%3A!()%2Clegend%3A(palette%3Acool%2CreverseColors%3A!f%2Csteps%3A10)%2Cmetric%3A(type%3Acpu)%2CnodeType%3Ahost%2Cregion%3A''%2Csort%3A(by%3Aname%2Cdirection%3Adesc)%2Csource%3Adefault%2Cview%3Amap)
[00:29:07]                     │ info Saving page source to: /dev/shm/workspace/parallel/13/kibana/x-pack/test/functional/failure_debug/html/InfraOps app feature controls infrastructure security global infrastructure all privileges infrastructure landing page without data shows _Change source configuration_ button.html
[00:29:07]                     └- ✖ fail: InfraOps app feature controls infrastructure security global infrastructure all privileges infrastructure landing page without data shows 'Change source configuration' button
[00:29:07]                     │      Error: expected testSubject(~infrastructureViewSetupInstructionsButton) to exist
[00:29:07]                     │       at TestSubjects.existOrFail (/dev/shm/workspace/parallel/13/kibana/test/functional/services/common/test_subjects.ts:62:15)
[00:29:07]                     │       at Context.<anonymous> (test/functional/apps/infra/feature_controls/infrastructure_security.ts:72:11)
[00:29:07]                     │       at Object.apply (/dev/shm/workspace/parallel/13/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:29:07]                     │ 
[00:29:07]                     │ 

Stack Trace

Error: expected testSubject(~infrastructureViewSetupInstructionsButton) to exist
    at TestSubjects.existOrFail (/dev/shm/workspace/parallel/13/kibana/test/functional/services/common/test_subjects.ts:62:15)
    at Context.<anonymous> (test/functional/apps/infra/feature_controls/infrastructure_security.ts:72:11)
    at Object.apply (/dev/shm/workspace/parallel/13/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)

X-Pack API Integration Tests.x-pack/test/api_integration/apis/metrics_ui/log_entries·ts.apis MetricsUI Endpoints log entry apis /log_entries/entries with a configured source "before all" hook for "returns the configured columns"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://dryrun

[00:00:00]       │
[00:00:00]         │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.ds-ilm-history-5-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
[00:00:00]         │ info [o.e.c.m.MetadataCreateDataStreamService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-000001] and backing indices []
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] moving index [.ds-ilm-history-5-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-000001][0]]])." previous.health="YELLOW" reason="shards started [[.ds-ilm-history-5-000001][0]]"
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] moving index [.ds-ilm-history-5-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ilm-history-ilm-policy]
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:01:32]           └-: MetricsUI Endpoints
[00:01:32]             └-> "before all" hook
[00:01:58]             └-: log entry apis
[00:01:58]               └-> "before all" hook
[00:01:58]               └-> "before all" hook
[00:01:58]                 │ info [infra/metrics_and_logs] Loading "mappings.json"
[00:01:58]                 │ info [infra/metrics_and_logs] Loading "data.json.gz"
[00:01:58]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [metricbeat-7.0.0-alpha1-2018.10.17] creating index, cause [api], templates [], shards [1]/[0]
[00:01:58]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metricbeat-7.0.0-alpha1-2018.10.17][0]]])." previous.health="YELLOW" reason="shards started [[metricbeat-7.0.0-alpha1-2018.10.17][0]]"
[00:01:58]                 │ info [infra/metrics_and_logs] Created index "metricbeat-7.0.0-alpha1-2018.10.17"
[00:01:58]                 │ debg [infra/metrics_and_logs] "metricbeat-7.0.0-alpha1-2018.10.17" settings {"index":{"codec":"best_compression","mapping":{"total_fields":{"limit":"10000"}},"number_of_replicas":"0","number_of_shards":"1","query":{"default_field":["beat.name","beat.hostname","beat.timezone","beat.version","tags","error.message","error.type","meta.cloud.provider","meta.cloud.instance_id","meta.cloud.instance_name","meta.cloud.machine_type","meta.cloud.availability_zone","meta.cloud.project_id","meta.cloud.region","docker.container.id","docker.container.image","docker.container.name","host.name","host.id","host.architecture","host.os.platform","host.os.version","host.os.family","host.mac","kubernetes.pod.name","kubernetes.pod.uid","kubernetes.namespace","kubernetes.node.name","kubernetes.container.name","kubernetes.container.image","metricset.module","metricset.name","metricset.host","metricset.namespace","type","service.name","aerospike.namespace.name","aerospike.namespace.node.host","aerospike.namespace.node.name","apache.status.hostname","ceph.cluster_health.overall_status","ceph.cluster_health.timechecks.round.status","ceph.monitor_health.health","ceph.monitor_health.name","ceph.osd_df.name","ceph.osd_df.device_class","ceph.osd_tree.name","ceph.osd_tree.type","ceph.osd_tree.children","ceph.osd_tree.status","ceph.osd_tree.device_class","ceph.osd_tree.father","ceph.pool_disk.name","couchbase.bucket.name","couchbase.bucket.type","couchbase.node.hostname","docker.container.command","docker.container.status","docker.container.ip_addresses","docker.healthcheck.status","docker.healthcheck.event.output","docker.image.id.current","docker.image.id.parent","docker.info.id","docker.network.interface","elasticsearch.cluster.name","elasticsearch.cluster.id","elasticsearch.cluster.state.id","elasticsearch.index.name","elasticsearch.node.name","elasticsearch.node.version","elasticsearch.node.jvm.version","elasticsearch.cluster.pending_task.source","elasticsearch.shar
d.state","etcd.leader.leader","etcd.self.id","etcd.self.leaderinfo.leader","etcd.self.leaderinfo.starttime","etcd.self.leaderinfo.uptime","etcd.self.name","etcd.self.starttime","etcd.self.state","golang.expvar.cmdline","golang.heap.cmdline","graphite.server.example","haproxy.stat.status","haproxy.stat.service_name","haproxy.stat.check.status","haproxy.stat.check.health.last","haproxy.stat.proxy.name","http.request.method","http.request.body","http.response.code","http.response.phrase","http.response.body","kafka.consumergroup.broker.address","kafka.consumergroup.id","kafka.consumergroup.topic","kafka.consumergroup.meta","kafka.consumergroup.client.id","kafka.consumergroup.client.host","kafka.consumergroup.client.member_id","kafka.partition.topic.name","kafka.partition.broker.address","kibana.stats.cluster_uuid","kibana.stats.name","kibana.stats.uuid","kibana.stats.version.number","kibana.stats.status.overall.state","kibana.status.name","kibana.status.uuid","kibana.status.version.number","kibana.status.status.overall.state","kubernetes.apiserver.request.client","kubernetes.apiserver.request.resource","kubernetes.apiserver.request.subresource","kubernetes.apiserver.request.scope","kubernetes.apiserver.request.verb","kubernetes.event.message","kubernetes.event.reason","kubernetes.event.type","kubernetes.event.metadata.name","kubernetes.event.metadata.namespace","kubernetes.event.metadata.resource_version","kubernetes.event.metadata.uid","kubernetes.event.metadata.self_link","kubernetes.event.involved_object.api_version","kubernetes.event.involved_object.kind","kubernetes.event.involved_object.name","kubernetes.event.involved_object.resource_version","kubernetes.event.involved_object.uid","kubernetes.container.id","kubernetes.container.status.phase","kubernetes.container.status.reason","kubernetes.deployment.name","kubernetes.node.status.ready","kubernetes.pod.status.phase","kubernetes.pod.status.ready","kubernetes.pod.status.scheduled","kubernetes.replicaset.name","kub
ernetes.statefulset.name","kubernetes.system.container","kubernetes.volume.name","kvm.dommemstat.stat.name","kvm.dommemstat.name","logstash.node.host","logstash.node.version","logstash.node.jvm.version","mongodb.collstats.db","mongodb.collstats.collection","mongodb.collstats.name","mongodb.dbstats.db","mongodb.status.version","mongodb.status.storage_engine.name","mysql.galera_status.cluster.status","mysql.galera_status.connected","mysql.galera_status.evs.evict","mysql.galera_status.evs.state","mysql.galera_status.local.state","mysql.galera_status.ready","nginx.stubstatus.hostname","php_fpm.pool.name","php_fpm.pool.process_manager","postgresql.activity.database.name","postgresql.activity.user.name","postgresql.activity.application_name","postgresql.activity.client.address","postgresql.activity.client.hostname","postgresql.activity.state","postgresql.activity.query","postgresql.database.name","postgresql.statement.query.text","rabbitmq.connection.name","rabbitmq.connection.vhost","rabbitmq.connection.user","rabbitmq.connection.node","rabbitmq.connection.type","rabbitmq.connection.host","rabbitmq.connection.peer.host","rabbitmq.exchange.name","rabbitmq.exchange.vhost","rabbitmq.exchange.user","rabbitmq.node.name","rabbitmq.node.type","rabbitmq.queue.name","rabbitmq.queue.vhost","rabbitmq.queue.node","rabbitmq.queue.state","redis.info.memory.max.policy","redis.info.memory.allocator","redis.info.persistence.rdb.bgsave.last_status","redis.info.persistence.aof.bgrewrite.last_status","redis.info.persistence.aof.write.last_status","redis.info.replication.role","redis.info.server.version","redis.info.server.git_sha1","redis.info.server.git_dirty","redis.info.server.build_id","redis.info.server.mode","redis.info.server.os","redis.info.server.arch_bits","redis.info.server.multiplexing_api","redis.info.server.gcc_version","redis.info.server.run_id","redis.info.server.config_file","redis.keyspace.id","system.diskio.name","system.diskio.serial_number","system.filesystem.device_nam
e","system.filesystem.type","system.filesystem.mount_point","system.network.name","system.process.name","system.process.state","system.process.cmdline","system.process.username","system.process.cwd","system.process.cgroup.id","system.process.cgroup.path","system.process.cgroup.cpu.id","system.process.cgroup.cpu.path","system.process.cgroup.cpuacct.id","system.process.cgroup.cpuacct.path","system.process.cgroup.memory.id","system.process.cgroup.memory.path","system.process.cgroup.blkio.id","system.process.cgroup.blkio.path","system.raid.name","system.raid.activity_state","system.socket.direction","system.socket.family","system.socket.remote.host","system.socket.remote.etld_plus_one","system.socket.remote.host_error","system.socket.process.command","system.socket.process.cmdline","system.socket.process.exe","system.socket.user.name","uwsgi.status.worker.status","uwsgi.status.worker.rss","vsphere.datastore.name","vsphere.datastore.fstype","vsphere.host.name","vsphere.host.network_names","vsphere.virtualmachine.host","vsphere.virtualmachine.name","vsphere.virtualmachine.network_names","windows.service.id","windows.service.name","windows.service.display_name","windows.service.start_type","windows.service.state","windows.service.exit_code","zookeeper.mntr.hostname","zookeeper.mntr.server_state","zookeeper.mntr.version","fields.*"]},"refresh_interval":"5s"}}
[00:01:58]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [filebeat-7.0.0-alpha1-2018.10.17] creating index, cause [api], templates [], shards [1]/[0]
[00:01:58]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[filebeat-7.0.0-alpha1-2018.10.17][0]]])." previous.health="YELLOW" reason="shards started [[filebeat-7.0.0-alpha1-2018.10.17][0]]"
[00:01:58]                 │ info [infra/metrics_and_logs] Created index "filebeat-7.0.0-alpha1-2018.10.17"
[00:01:58]                 │ debg [infra/metrics_and_logs] "filebeat-7.0.0-alpha1-2018.10.17" settings {"index":{"codec":"best_compression","mapping":{"total_fields":{"limit":"10000"}},"number_of_replicas":"0","number_of_shards":"1","query":{"default_field":["beat.name","beat.hostname","beat.timezone","beat.version","tags","error.message","error.type","meta.cloud.provider","meta.cloud.instance_id","meta.cloud.instance_name","meta.cloud.machine_type","meta.cloud.availability_zone","meta.cloud.project_id","meta.cloud.region","docker.container.id","docker.container.image","docker.container.name","host.name","host.id","host.architecture","host.os.platform","host.os.version","host.os.family","host.mac","kubernetes.pod.name","kubernetes.pod.uid","kubernetes.namespace","kubernetes.node.name","kubernetes.container.name","kubernetes.container.image","source","message","stream","prospector.type","input.type","read_timestamp","fileset.module","fileset.name","syslog.severity_label","syslog.facility_label","process.program","service.name","log.level","apache2.access.remote_ip","apache2.access.user_name","apache2.access.method","apache2.access.url","apache2.access.http_version","apache2.access.referrer","apache2.access.agent","apache2.access.user_agent.device","apache2.access.user_agent.patch","apache2.access.user_agent.name","apache2.access.user_agent.os","apache2.access.user_agent.os_name","apache2.access.geoip.continent_name","apache2.access.geoip.country_iso_code","apache2.access.geoip.region_name","apache2.access.geoip.city_name","apache2.error.level","apache2.error.client","apache2.error.message","apache2.error.module","auditd.log.record_type","auditd.log.old_auid","auditd.log.new_auid","auditd.log.old_ses","auditd.log.new_ses","auditd.log.acct","auditd.log.pid","auditd.log.ppid","auditd.log.items","auditd.log.item","auditd.log.a0","auditd.log.res","auditd.log.geoip.continent_name","auditd.log.geoip.city_name","auditd.log.geoip.region_name","auditd.log.geoip.country_iso_cod
e","elasticsearch.audit.node_name","elasticsearch.audit.layer","elasticsearch.audit.event_type","elasticsearch.audit.origin_type","elasticsearch.audit.principal","elasticsearch.audit.action","elasticsearch.audit.uri","elasticsearch.audit.request","elasticsearch.audit.request_body","elasticsearch.gc.tags","elasticsearch.server.component","elasticsearch.slowlog.loglevel","elasticsearch.slowlog.logger","elasticsearch.slowlog.node_name","elasticsearch.slowlog.index_name","elasticsearch.slowlog.shard_id","elasticsearch.slowlog.took","elasticsearch.slowlog.types","elasticsearch.slowlog.stats","elasticsearch.slowlog.search_type","elasticsearch.slowlog.source_query","elasticsearch.slowlog.extra_source","elasticsearch.slowlog.took_millis","elasticsearch.slowlog.total_hits","elasticsearch.slowlog.total_shards","icinga.debug.facility","icinga.debug.severity","icinga.debug.message","icinga.main.facility","icinga.main.severity","icinga.main.message","icinga.startup.facility","icinga.startup.severity","icinga.startup.message","iis.access.server_ip","iis.access.method","iis.access.url","iis.access.query_string","iis.access.user_name","iis.access.remote_ip","iis.access.referrer","iis.access.site_name","iis.access.server_name","iis.access.http_version","iis.access.cookie","iis.access.hostname","iis.access.agent","iis.access.user_agent.device","iis.access.user_agent.patch","iis.access.user_agent.name","iis.access.user_agent.os","iis.access.user_agent.os_name","iis.access.geoip.continent_name","iis.access.geoip.country_iso_code","iis.access.geoip.region_name","iis.access.geoip.city_name","iis.error.remote_ip","iis.error.server_ip","iis.error.http_version","iis.error.method","iis.error.url","iis.error.reason_phrase","iis.error.queue_name","iis.error.geoip.continent_name","iis.error.geoip.country_iso_code","iis.error.geoip.region_name","iis.error.geoip.city_name","kafka.log.timestamp","kafka.log.level","kafka.log.message","kafka.log.component","kafka.log.class","kafka.log.trace.class","
kafka.log.trace.message","kafka.log.trace.full","kibana.log.tags","kibana.log.state","logstash.log.message","logstash.log.level","logstash.log.module","logstash.log.thread","logstash.slowlog.message","logstash.slowlog.level","logstash.slowlog.module","logstash.slowlog.thread","logstash.slowlog.event","logstash.slowlog.plugin_name","logstash.slowlog.plugin_type","logstash.slowlog.plugin_params","mongodb.log.severity","mongodb.log.component","mongodb.log.context","mongodb.log.message","mysql.error.timestamp","mysql.error.level","mysql.error.message","mysql.slowlog.user","mysql.slowlog.host","mysql.slowlog.ip","mysql.slowlog.query","nginx.access.remote_ip","nginx.access.user_name","nginx.access.method","nginx.access.url","nginx.access.http_version","nginx.access.referrer","nginx.access.agent","nginx.access.user_agent.device","nginx.access.user_agent.patch","nginx.access.user_agent.name","nginx.access.user_agent.os","nginx.access.user_agent.os_name","nginx.access.geoip.continent_name","nginx.access.geoip.country_iso_code","nginx.access.geoip.region_name","nginx.access.geoip.city_name","nginx.error.level","nginx.error.message","osquery.result.name","osquery.result.action","osquery.result.host_identifier","osquery.result.calendar_time","postgresql.log.timestamp","postgresql.log.timezone","postgresql.log.user","postgresql.log.database","postgresql.log.level","postgresql.log.query","postgresql.log.message","redis.log.role","redis.log.level","redis.log.message","redis.slowlog.cmd","redis.slowlog.key","redis.slowlog.args","system.auth.timestamp","system.auth.hostname","system.auth.program","system.auth.message","system.auth.user","system.auth.ssh.event","system.auth.ssh.method","system.auth.ssh.signature","system.auth.ssh.geoip.continent_name","system.auth.ssh.geoip.city_name","system.auth.ssh.geoip.region_name","system.auth.ssh.geoip.country_iso_code","system.auth.sudo.error","system.auth.sudo.tty","system.auth.sudo.pwd","system.auth.sudo.user","system.auth.sudo.command","sy
stem.auth.useradd.name","system.auth.useradd.home","system.auth.useradd.shell","system.auth.groupadd.name","system.syslog.timestamp","system.syslog.hostname","system.syslog.program","system.syslog.pid","system.syslog.message","traefik.access.remote_ip","traefik.access.user_name","traefik.access.method","traefik.access.url","traefik.access.http_version","traefik.access.referrer","traefik.access.agent","traefik.access.user_agent.device","traefik.access.user_agent.patch","traefik.access.user_agent.name","traefik.access.user_agent.os","traefik.access.user_agent.os_name","traefik.access.geoip.continent_name","traefik.access.geoip.country_iso_code","traefik.access.geoip.region_name","traefik.access.geoip.city_name","traefik.access.frontend_name","traefik.access.backend_url","fields.*"]},"refresh_interval":"5s"}}
[00:02:01]                 │ info [infra/metrics_and_logs] Indexed 11063 docs into "metricbeat-7.0.0-alpha1-2018.10.17"
[00:02:01]                 │ info [infra/metrics_and_logs] Indexed 1632 docs into "filebeat-7.0.0-alpha1-2018.10.17"
[00:02:01]               └-: /log_entries/entries
[00:02:01]                 └-> "before all" hook
[00:02:03]                 └-: with a configured source
[00:02:03]                   └-> "before all" hook
[00:02:03]                   └-> "before all" hook
[00:02:03]                     │ info [empty_kibana] Loading "mappings.json"
[00:02:03]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2/lFHJp1FwTMyAeew-SR16Gw] deleting index
[00:02:03]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_1/60PTPtdYRgKX1Teq2NedJw] deleting index
[00:02:03]                     │ info [empty_kibana] Deleted existing index [".kibana_2",".kibana_1"]
[00:02:03]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana] creating index, cause [api], templates [], shards [1]/[1]
[00:02:03]                     │ info [empty_kibana] Created index ".kibana"
[00:02:03]                     │ debg [empty_kibana] ".kibana" settings {"index":{"number_of_replicas":"1","number_of_shards":"1"}}
[00:02:03]                     │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana/mvcGZgXZRXO3ssEPInyIiA] update_mapping [_doc]
[00:02:03]                     │ debg Migrating saved objects
[00:02:03]                     │ proc [kibana]   log   [14:15:23.245] [info][savedobjects-service] Creating index .kibana_2.
[00:02:03]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2] creating index, cause [api], templates [], shards [1]/[1]
[00:02:03]                     │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] updating number_of_replicas to [0] for indices [.kibana_2]
[00:02:03]                     │ proc [kibana]   log   [14:15:23.296] [info][savedobjects-service] Reindexing .kibana to .kibana_1
[00:02:03]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1]
[00:02:03]                     │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] updating number_of_replicas to [0] for indices [.kibana_1]
[00:02:03]                     │ info [o.e.t.LoggingTaskListener] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] 19740 finished with response BulkByScrollResponse[took=1.5ms,timed_out=false,sliceId=null,updated=0,created=0,deleted=0,batches=0,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:02:04]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana/mvcGZgXZRXO3ssEPInyIiA] deleting index
[00:02:04]                     │ proc [kibana]   log   [14:15:23.619] [info][savedobjects-service] Migrating .kibana_1 saved objects to .kibana_2
[00:02:04]                     │ proc [kibana]   log   [14:15:23.624] [info][savedobjects-service] Pointing alias .kibana to .kibana_2.
[00:02:04]                     │ proc [kibana]   log   [14:15:23.645] [info][savedobjects-service] Finished in 401ms.
[00:02:04]                     │ debg Creating Infra UI source configuration "default" with properties {"name":"Test Source","logColumns":[{"timestampColumn":{"id":"18e384e7-7174-4d94-b207-08960f477c43"}},{"fieldColumn":{"id":"e2c18d9f-ae24-4d5e-8fe8-898aa12d1c89","field":"host.name"}},{"fieldColumn":{"id":"0f5060f6-fa8a-4858-b641-b92d3c9c7b1e","field":"event.dataset"}},{"messageColumn":{"id":"32aeefb2-463b-458c-9a55-e97eac542f0d"}}]}
[00:02:04]                     │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2/saJSph1CQ3ydy9c_ReUzKw] update_mapping [_doc]
[00:02:04]                     └- ✖ fail: apis MetricsUI Endpoints log entry apis /log_entries/entries with a configured source "before all" hook for "returns the configured columns"
[00:02:04]                     │      Error: Network error: Server response was missing for query 'createSource'.
[00:02:04]                     │       at new ApolloError (/dev/shm/workspace/kibana/node_modules/src/errors/ApolloError.ts:56:5)
[00:02:04]                     │       at Object.error (/dev/shm/workspace/kibana/node_modules/src/core/QueryManager.ts:296:13)
[00:02:04]                     │       at notifySubscription (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:130:18)
[00:02:04]                     │       at onNotify (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:161:3)
[00:02:04]                     │       at SubscriptionObserver.error (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:220:7)
[00:02:04]                     │       at /dev/shm/workspace/kibana/node_modules/apollo-link-http/src/httpLink.ts:184:20
[00:02:04]                     │       at runMicrotasks (<anonymous>)
[00:02:04]                     │       at processTicksAndRejections (internal/process/task_queues.js:97:5)
[00:02:04]                     │ 
[00:02:04]                     │ 

Stack Trace

ApolloError: Network error: Server response was missing for query 'createSource'.
    at new ApolloError (/dev/shm/workspace/kibana/node_modules/src/errors/ApolloError.ts:56:5)
    at Object.error (/dev/shm/workspace/kibana/node_modules/src/core/QueryManager.ts:296:13)
    at notifySubscription (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:130:18)
    at onNotify (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:161:3)
    at SubscriptionObserver.error (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:220:7)
    at /dev/shm/workspace/kibana/node_modules/apollo-link-http/src/httpLink.ts:184:20
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  graphQLErrors: [],
  networkError: Error [ServerError]: Server response was missing for query 'createSource'.
      at Object.exports.throwServerError (/dev/shm/workspace/kibana/node_modules/apollo-link-http-common/src/index.ts:114:17)
      at /dev/shm/workspace/kibana/node_modules/apollo-link-http-common/src/index.ts:159:11
      at runMicrotasks (<anonymous>)
      at processTicksAndRejections (internal/process/task_queues.js:97:5) {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    },
    statusCode: 200,
    result: {
      graphqlResponse: '{"data":{"createSource":{"source":{"id":"default","version":"WzEsMV0=","configuration":{"name":"Test Source","logColumns":[{"timestampColumn":{"id":"18e384e7-7174-4d94-b207-08960f477c43","__typename":"InfraSourceTimestampLogColumnAttributes"},"__typename":"InfraSourceTimestampLogColumn"},{"fieldColumn":{"id":"e2c18d9f-ae24-4d5e-8fe8-898aa12d1c89","field":"host.name","__typename":"InfraSourceFieldLogColumnAttributes"},"__typename":"InfraSourceFieldLogColumn"},{"fieldColumn":{"id":"0f5060f6-fa8a-4858-b641-b92d3c9c7b1e","field":"event.dataset","__typename":"InfraSourceFieldLogColumnAttributes"},"__typename":"InfraSourceFieldLogColumn"},{"messageColumn":{"id":"32aeefb2-463b-458c-9a55-e97eac542f0d","__typename":"InfraSourceMessageLogColumnAttributes"},"__typename":"InfraSourceMessageLogColumn"}],"__typename":"InfraSourceConfiguration"},"__typename":"InfraSource"},"__typename":"UpdateSourceResult"}}}\n',
      responseInit: [Object]
    }
  },
  extraInfo: undefined
}
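For context on the failure above: the response arrived with `statusCode: 200` and a populated `result.graphqlResponse` string, yet apollo-link-http-common threw "Server response was missing for query 'createSource'". This is consistent with the `runHttpQuery` change noted in the PR comments — in apollo-server-core 2.x it resolves to an object `{ graphqlResponse, responseInit }` rather than the serialized body string that 1.x returned, so an adapter that forwards the resolved value as-is sends the wrapper object instead of the GraphQL JSON, and the client finds no `data` key for the operation. The sketch below is a hypothetical illustration of the unwrapping an adapter would need; the `HttpQueryResponse` shape and `toResponseBody` helper are assumptions for illustration, not Kibana's actual adapter code.

```typescript
// Hypothetical sketch: unwrap apollo-server-core 2.x's runHttpQuery result
// before sending it as the HTTP response body. Shapes are illustrative.

interface HttpQueryResponse {
  graphqlResponse: string; // the serialized GraphQL result (a JSON string)
  responseInit: { headers?: Record<string, string>; status?: number };
}

// Forwarding `result` itself reproduces the failure above: the body is the
// wrapper object, so the client sees no `data.createSource`. Sending
// `result.graphqlResponse` yields the JSON the client expects.
function toResponseBody(result: HttpQueryResponse): string {
  return result.graphqlResponse;
}

// Simulated 2.x result, shaped like the `result` object in the stack trace.
const simulated: HttpQueryResponse = {
  graphqlResponse: JSON.stringify({
    data: { createSource: { source: { id: 'default' } } },
  }),
  responseInit: { headers: { 'Content-Type': 'application/json' } },
};

const body = toResponseBody(simulated);
const parsed = JSON.parse(body);
console.log(parsed.data.createSource.source.id); // → default
```

Under that assumption, the fix belongs in the Kibana-side HTTP handler that calls `runHttpQuery`, not in the test itself, which matches the scheduling of the upgrade work into a follow-up issue.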

X-Pack API Integration Tests.x-pack/test/api_integration/apis/metrics_ui/log_entries·ts.apis MetricsUI Endpoints log entry apis /log_entries/entries with a configured source "before all" hook for "returns the configured columns"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:01:36]           └-: MetricsUI Endpoints
[00:01:36]             └-> "before all" hook
[00:02:03]             └-: log entry apis
[00:02:03]               └-> "before all" hook
[00:02:03]               └-> "before all" hook
[00:02:03]                 │ info [infra/metrics_and_logs] Loading "mappings.json"
[00:02:03]                 │ info [infra/metrics_and_logs] Loading "data.json.gz"
[00:02:03]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [metricbeat-7.0.0-alpha1-2018.10.17] creating index, cause [api], templates [], shards [1]/[0]
[00:02:03]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[metricbeat-7.0.0-alpha1-2018.10.17][0]]])." previous.health="YELLOW" reason="shards started [[metricbeat-7.0.0-alpha1-2018.10.17][0]]"
[00:02:03]                 │ info [infra/metrics_and_logs] Created index "metricbeat-7.0.0-alpha1-2018.10.17"
[00:02:03]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [filebeat-7.0.0-alpha1-2018.10.17] creating index, cause [api], templates [], shards [1]/[0]
[00:02:03]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[filebeat-7.0.0-alpha1-2018.10.17][0]]])." previous.health="YELLOW" reason="shards started [[filebeat-7.0.0-alpha1-2018.10.17][0]]"
[00:02:03]                 │ info [infra/metrics_and_logs] Created index "filebeat-7.0.0-alpha1-2018.10.17"
kafka.log.trace.message","kafka.log.trace.full","kibana.log.tags","kibana.log.state","logstash.log.message","logstash.log.level","logstash.log.module","logstash.log.thread","logstash.slowlog.message","logstash.slowlog.level","logstash.slowlog.module","logstash.slowlog.thread","logstash.slowlog.event","logstash.slowlog.plugin_name","logstash.slowlog.plugin_type","logstash.slowlog.plugin_params","mongodb.log.severity","mongodb.log.component","mongodb.log.context","mongodb.log.message","mysql.error.timestamp","mysql.error.level","mysql.error.message","mysql.slowlog.user","mysql.slowlog.host","mysql.slowlog.ip","mysql.slowlog.query","nginx.access.remote_ip","nginx.access.user_name","nginx.access.method","nginx.access.url","nginx.access.http_version","nginx.access.referrer","nginx.access.agent","nginx.access.user_agent.device","nginx.access.user_agent.patch","nginx.access.user_agent.name","nginx.access.user_agent.os","nginx.access.user_agent.os_name","nginx.access.geoip.continent_name","nginx.access.geoip.country_iso_code","nginx.access.geoip.region_name","nginx.access.geoip.city_name","nginx.error.level","nginx.error.message","osquery.result.name","osquery.result.action","osquery.result.host_identifier","osquery.result.calendar_time","postgresql.log.timestamp","postgresql.log.timezone","postgresql.log.user","postgresql.log.database","postgresql.log.level","postgresql.log.query","postgresql.log.message","redis.log.role","redis.log.level","redis.log.message","redis.slowlog.cmd","redis.slowlog.key","redis.slowlog.args","system.auth.timestamp","system.auth.hostname","system.auth.program","system.auth.message","system.auth.user","system.auth.ssh.event","system.auth.ssh.method","system.auth.ssh.signature","system.auth.ssh.geoip.continent_name","system.auth.ssh.geoip.city_name","system.auth.ssh.geoip.region_name","system.auth.ssh.geoip.country_iso_code","system.auth.sudo.error","system.auth.sudo.tty","system.auth.sudo.pwd","system.auth.sudo.user","system.auth.sudo.command","sy
stem.auth.useradd.name","system.auth.useradd.home","system.auth.useradd.shell","system.auth.groupadd.name","system.syslog.timestamp","system.syslog.hostname","system.syslog.program","system.syslog.pid","system.syslog.message","traefik.access.remote_ip","traefik.access.user_name","traefik.access.method","traefik.access.url","traefik.access.http_version","traefik.access.referrer","traefik.access.agent","traefik.access.user_agent.device","traefik.access.user_agent.patch","traefik.access.user_agent.name","traefik.access.user_agent.os","traefik.access.user_agent.os_name","traefik.access.geoip.continent_name","traefik.access.geoip.country_iso_code","traefik.access.geoip.region_name","traefik.access.geoip.city_name","traefik.access.frontend_name","traefik.access.backend_url","fields.*"]},"refresh_interval":"5s"}}
[00:02:06]                 │ info [infra/metrics_and_logs] Indexed 11063 docs into "metricbeat-7.0.0-alpha1-2018.10.17"
[00:02:06]                 │ info [infra/metrics_and_logs] Indexed 1632 docs into "filebeat-7.0.0-alpha1-2018.10.17"
[00:02:07]               └-: /log_entries/entries
[00:02:07]                 └-> "before all" hook
[00:02:09]                 └-: with a configured source
[00:02:09]                   └-> "before all" hook
[00:02:09]                   └-> "before all" hook
[00:02:09]                     │ info [empty_kibana] Loading "mappings.json"
[00:02:09]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2/y2peWszjRm6OkWYSZEQDdQ] deleting index
[00:02:09]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_1/ykamdARESBCS-hRv6-SCzA] deleting index
[00:02:09]                     │ info [empty_kibana] Deleted existing index [".kibana_2",".kibana_1"]
[00:02:09]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana] creating index, cause [api], templates [], shards [1]/[1]
[00:02:09]                     │ info [empty_kibana] Created index ".kibana"
[00:02:09]                     │ debg [empty_kibana] ".kibana" settings {"index":{"number_of_replicas":"1","number_of_shards":"1"}}
[00:02:09]                     │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana/KzmxwAOKSUC6bdMNb6mEkw] update_mapping [_doc]
[00:02:09]                     │ debg Migrating saved objects
[00:02:09]                     │ proc [kibana]   log   [13:57:48.597] [info][savedobjects-service] Creating index .kibana_2.
[00:02:09]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2] creating index, cause [api], templates [], shards [1]/[1]
[00:02:09]                     │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] updating number_of_replicas to [0] for indices [.kibana_2]
[00:02:09]                     │ proc [kibana]   log   [13:57:48.651] [info][savedobjects-service] Reindexing .kibana to .kibana_1
[00:02:09]                     │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1]
[00:02:09]                     │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] updating number_of_replicas to [0] for indices [.kibana_1]
[00:02:09]                     │ info [o.e.t.LoggingTaskListener] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] 19776 finished with response BulkByScrollResponse[took=1.3ms,timed_out=false,sliceId=null,updated=0,created=0,deleted=0,batches=0,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:02:09]                     │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana/KzmxwAOKSUC6bdMNb6mEkw] deleting index
[00:02:09]                     │ proc [kibana]   log   [13:57:48.977] [info][savedobjects-service] Migrating .kibana_1 saved objects to .kibana_2
[00:02:09]                     │ proc [kibana]   log   [13:57:48.982] [info][savedobjects-service] Pointing alias .kibana to .kibana_2.
[00:02:09]                     │ proc [kibana]   log   [13:57:49.008] [info][savedobjects-service] Finished in 413ms.
[00:02:09]                     │ debg Creating Infra UI source configuration "default" with properties {"name":"Test Source","logColumns":[{"timestampColumn":{"id":"5f69b3bc-38a4-4430-9366-f293c176cb46"}},{"fieldColumn":{"id":"05a0a12c-e214-444b-982a-4b14a91caa37","field":"host.name"}},{"fieldColumn":{"id":"c857e860-0c56-4ecc-9e7b-46629fab9e1f","field":"event.dataset"}},{"messageColumn":{"id":"eb71742a-0a18-4738-8f41-57257fcc1779"}}]}
[00:02:09]                     │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-debian-tests-xxl-1605877881367589772] [.kibana_2/iWAoqgEPSgmGHL1KVM6vtQ] update_mapping [_doc]
[00:02:10]                     └- ✖ fail: apis MetricsUI Endpoints log entry apis /log_entries/entries with a configured source "before all" hook for "returns the configured columns"
[00:02:10]                     │      Error: Network error: Server response was missing for query 'createSource'.
[00:02:10]                     │       at new ApolloError (/dev/shm/workspace/kibana/node_modules/src/errors/ApolloError.ts:56:5)
[00:02:10]                     │       at Object.error (/dev/shm/workspace/kibana/node_modules/src/core/QueryManager.ts:296:13)
[00:02:10]                     │       at notifySubscription (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:130:18)
[00:02:10]                     │       at onNotify (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:161:3)
[00:02:10]                     │       at SubscriptionObserver.error (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:220:7)
[00:02:10]                     │       at /dev/shm/workspace/kibana/node_modules/apollo-link-http/src/httpLink.ts:184:20
[00:02:10]                     │       at runMicrotasks (<anonymous>)
[00:02:10]                     │       at processTicksAndRejections (internal/process/task_queues.js:97:5)
[00:02:10]                     │ 
[00:02:10]                     │ 

Stack Trace

ApolloError: Network error: Server response was missing for query 'createSource'.
    at new ApolloError (/dev/shm/workspace/kibana/node_modules/src/errors/ApolloError.ts:56:5)
    at Object.error (/dev/shm/workspace/kibana/node_modules/src/core/QueryManager.ts:296:13)
    at notifySubscription (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:130:18)
    at onNotify (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:161:3)
    at SubscriptionObserver.error (/dev/shm/workspace/kibana/node_modules/zen-observable/lib/Observable.js:220:7)
    at /dev/shm/workspace/kibana/node_modules/apollo-link-http/src/httpLink.ts:184:20
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  graphQLErrors: [],
  networkError: Error [ServerError]: Server response was missing for query 'createSource'.
      at Object.exports.throwServerError (/dev/shm/workspace/kibana/node_modules/apollo-link-http-common/src/index.ts:114:17)
      at /dev/shm/workspace/kibana/node_modules/apollo-link-http-common/src/index.ts:159:11
      at runMicrotasks (<anonymous>)
      at processTicksAndRejections (internal/process/task_queues.js:97:5) {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    },
    statusCode: 200,
    result: {
      graphqlResponse: '{"data":{"createSource":{"source":{"id":"default","version":"WzEsMV0=","configuration":{"name":"Test Source","logColumns":[{"timestampColumn":{"id":"5f69b3bc-38a4-4430-9366-f293c176cb46","__typename":"InfraSourceTimestampLogColumnAttributes"},"__typename":"InfraSourceTimestampLogColumn"},{"fieldColumn":{"id":"05a0a12c-e214-444b-982a-4b14a91caa37","field":"host.name","__typename":"InfraSourceFieldLogColumnAttributes"},"__typename":"InfraSourceFieldLogColumn"},{"fieldColumn":{"id":"c857e860-0c56-4ecc-9e7b-46629fab9e1f","field":"event.dataset","__typename":"InfraSourceFieldLogColumnAttributes"},"__typename":"InfraSourceFieldLogColumn"},{"messageColumn":{"id":"eb71742a-0a18-4738-8f41-57257fcc1779","__typename":"InfraSourceMessageLogColumnAttributes"},"__typename":"InfraSourceMessageLogColumn"}],"__typename":"InfraSourceConfiguration"},"__typename":"InfraSource"},"__typename":"UpdateSourceResult"}}}\n',
      responseInit: [Object]
    }
  },
  extraInfo: undefined
}

and 1 more failures, only showing the first 3.
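For context on the failure above: apollo-link-http treats a response body that contains neither `data` nor `errors` as a missing server response for the operation, and the error object shows the body here was `runHttpQuery`'s result wrapper (`{ graphqlResponse, responseInit }`) rather than the raw GraphQL JSON. A minimal sketch of that check — the names are simplified assumptions for illustration, not the library's actual code:

```typescript
// Hypothetical, simplified recreation of the response check performed by
// apollo-link-http-common. Names and shapes are assumptions for illustration.
interface GraphQLResult {
  data?: Record<string, unknown>;
  errors?: unknown[];
}

// Assumption: the client rejects a body with neither `data` nor `errors`
// as a missing server response for the operation.
function checkResponse(body: Record<string, unknown>, operationName: string): GraphQLResult {
  if (!('data' in body) && !('errors' in body)) {
    throw new Error(`Server response was missing for query '${operationName}'.`);
  }
  return body as GraphQLResult;
}

// In this failure the handler appears to have returned runHttpQuery's result
// object verbatim, so the body had no `data` key — only `graphqlResponse`.
const leakedResult = { graphqlResponse: '{"data":{"createSource":{}}}', responseInit: {} };
let message = '';
try {
  checkResponse(leakedResult, 'createSource');
} catch (e) {
  message = (e as Error).message;
}
console.log(message);
```

If that reading is right, the server-side fix would be to send the serialized `graphqlResponse` string as the HTTP body instead of the whole result object.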

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 731.8KB | 731.8KB | -29.0B |

Distributable file count

| id | before | after | diff |
| --- | --- | --- | --- |
| default | 42956 | 43634 | +678 |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

| id | before | after | diff |
| --- | --- | --- | --- |
| upgradeAssistant | 60.5KB | 60.5KB | -29.0B |

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@jasonrhodes
Member

@rylnd are you all still using Apollo? We are close to having it removed in Metrics, at which point we no longer need this bump. I'm curious whether I should push hard on that, or whether we should just bite the bullet and bump it for the security fixes. The work has been pushed back multiple times on our side.

@rylnd
Contributor

rylnd commented Nov 30, 2020

@jasonrhodes while most of our GraphQL endpoints have been replaced with search strategy usage, we've still got several relating to timelines. I don't believe there's any technical blocker there (@XavierM can correct me), but migrating those likely won't happen before 7.12.

@azasypkin
Member

> @jasonrhodes while most of our GraphQL endpoints have been replaced with search strategy usage, we've still got several relating to timelines. I don't believe there's any technical blocker there (@XavierM can correct me), but migrating those likely won't happen before 7.12.

Hey @rylnd,

It looks like there is only one place left where we still directly use apollo-server-core. Are you still planning to migrate away from it any time soon? If not, could you try to upgrade it instead?

I tried to do it myself, but my GraphQL/Apollo knowledge isn't deep enough to be sure I'd fix your code properly.
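One of the blockers called out earlier in the thread was that v2's `runHttpQuery` options require a `schemaHash`, while Kibana builds its schema via graphql-tools. A hedged sketch of how a caller might derive one — the `SchemaLike` shape and the hashing choice are assumptions for illustration, not the actual apollo-server-core algorithm or the real Kibana change:

```typescript
// Sketch only: derive a stable hash from a printed schema to satisfy a
// `schemaHash`-style option. `SchemaLike` is a hypothetical stand-in for
// the schema object produced by graphql-tools.
import { createHash } from 'crypto';

interface SchemaLike {
  printed: string; // assumption: the SDL text of the schema
}

// Assumption: any stable digest of the schema text would do for this sketch;
// apollo-server-core has its own schema-hashing utility.
function hashSchema(schema: SchemaLike): string {
  return createHash('sha512').update(schema.printed).digest('hex');
}

const schema: SchemaLike = { printed: 'type Query { hello: String }' };
const options = {
  schema,
  schemaHash: hashSchema(schema), // newly required by runHttpQuery in v2
};
console.log(options.schemaHash.length);
```

The real upgrade would need to use whatever hashing apollo-server-core expects; this only illustrates the shape of the new required option.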

@jportner jportner closed this Mar 9, 2021
@jportner jportner deleted the bump-apollo-server-core branch March 9, 2021 18:40
Labels
chore release_note:skip Skip the PR/issue when compiling release notes v8.0.0
7 participants