[Ingest Manager] Don't retain POST /setup results. fixes #74587 #75372
Conversation
works, afaict. no tests. one TS issue.
```
firstSuccessOrTryAgain
  ✓ reject/throws is called again & its value returned (18ms)
  ✓ the first success value is cached (2ms)
```
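The two test names above suggest a wrapper that retries the underlying async function after a rejection but caches the first fulfilled value. A minimal sketch of that behavior, assuming the name from the output (the PR's actual implementation may differ):

```typescript
// Hypothetical sketch matching the test names: a rejection is not retained
// (the next call tries again), but the first success is cached forever.
type AsyncFn<T> = () => Promise<T>;

function firstSuccessOrTryAgain<T>(fn: AsyncFn<T>): AsyncFn<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) {
      cached = fn().catch((error) => {
        cached = undefined; // drop the rejected promise so callers can retry
        throw error;
      });
    }
    return cached;
  };
}
```

Because the rejected promise clears itself out of the cache, every caller after a failure triggers a fresh attempt, while a success is returned to all later callers without re-running the function.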
Terrible tests. Committing & pushing to see if it fixes failures like https://github.com/elastic/kibana/pull/74507/checks?check_run_id=980178887 and https://kibana-ci.elastic.co/job/elastic+kibana+pipeline-pull-request/67892/execution/node/663/log/:

```
07:36:56 └-> "before all" hook
07:36:56 └-> should not allow to enroll an agent with a invalid enrollment
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 │ proc [kibana] error [11:36:56.369] Error: Internal Server Error
07:36:56 │ proc [kibana]   at HapiResponseAdapter.toError (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:132:19)
07:36:56 │ proc [kibana]   at HapiResponseAdapter.toHapiResponse (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:86:19)
07:36:56 │ proc [kibana]   at HapiResponseAdapter.handle (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:81:17)
07:36:56 │ proc [kibana]   at Router.handle (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/router.js:164:34)
07:36:56 │ proc [kibana]   at process._tickCallback (internal/process/next_tick.js:68:7)
07:36:56 │ proc [kibana] log [11:36:56.581] [info][authentication][plugins][security] Authentication attempt failed: [security_exception] missing authentication credentials for REST request [/_security/_authenticate], with { header={ WWW-Authenticate={ 0="ApiKey" & 1="Basic realm=\"security\" charset=\"UTF-8\"" } } }
07:36:56 └- ✓ pass (60ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a invalid enrollment"
07:36:56 └-> should not allow to enroll an agent with a shared id if it already exists
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✓ pass (111ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a shared id if it already exists "
07:36:56 └-> should not allow to enroll an agent with a version > kibana
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✓ pass (58ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a version > kibana"
07:36:56 └-> should allow to enroll an agent with a valid enrollment token
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✖ fail: Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should allow to enroll an agent with a valid enrollment token
07:36:56 │ Error: expected 200 "OK", got 500 "Internal Server Error"
07:36:56 │   at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
07:36:56 │   at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
07:36:56 │   at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
07:36:56 │   at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
07:36:56 │   at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
07:36:56 │   at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
07:36:56 │   at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
07:36:56 │   at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
07:36:56 │   at endReadableNT (_stream_readable.js:1145:12)
07:36:56 │   at process._tickCallback (internal/process/next_tick.js:63:19)
07:36:56 │
```
`firstPromiseBlocksAndFufills` for "the first promise created blocks others from being created, then fufills all with that first result"
Add an explicit `isPending` value instead of overloading the role of `status`. Could probably do without it, but it makes the intent clearer.
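An illustrative sketch of those two points (the `SetupState`/`trackSetup` names are assumptions, not the plugin's API): `isPending` is a separate boolean rather than another value crammed into `status`, and the async function's own `.catch` records the failure so nothing rejects unhandled.

```typescript
type SetupStatus = 'success' | 'error';

interface SetupState {
  isPending: boolean;       // explicit, instead of a third `status` value
  status?: SetupStatus;
  error?: unknown;
}

function trackSetup(run: () => Promise<void>): SetupState {
  const state: SetupState = { isPending: true };
  run()
    .then(() => {
      state.status = 'success';
    })
    .catch((error) => {
      state.status = 'error'; // recorded for later inspection…
      state.error = error;    // …so the rejection is handled, not unhandled
    })
    .finally(() => {
      state.isPending = false;
    });
  return state;
}
```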
@@ -40,14 +47,19 @@ export const registerRoutes = (router: IRouter, config: IngestManagerConfigType) => {
      },
      getFleetStatusHandler
    );
};

export const registerRoutes = (router: IRouter, config: IngestManagerConfigType) => {
I split this function up and exported the individual register*Route functions for some now-removed tests. I left the split in because it seems useful and is common in other Kibana plugins.
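A minimal sketch of the split that comment describes: individual register* functions composed by `registerRoutes`. The `IRouter` and `IngestManagerConfigType` shapes here are simplified stand-ins for Kibana's real types, and the route paths and `fleet.enabled` check are illustrative assumptions:

```typescript
interface IRouter {
  post(route: { path: string }, handler: () => void): void;
}

interface IngestManagerConfigType {
  fleet: { enabled: boolean };
}

const noopHandler = () => {};

const registerSetupRoute = (router: IRouter) => {
  router.post({ path: '/api/ingest_manager/setup' }, noopHandler);
};

const registerFleetRoutes = (router: IRouter) => {
  router.post({ path: '/api/ingest_manager/fleet/agents/enroll' }, noopHandler);
};

const registerRoutes = (router: IRouter, config: IngestManagerConfigType) => {
  registerSetupRoute(router);
  if (config.fleet.enabled) {
    registerFleetRoutes(router); // fleet routes only when enabled in config
  }
};
```

The benefit of the split is that each route group can be registered (and tested) in isolation without pulling in the whole plugin.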
Pinging @elastic/ingest-management (Team:Ingest Management)

@elasticmachine merge upstream

merge conflict between base and head
💛 Build succeeded, but was flaky
Test Failures: Chrome UI Functional Tests — test/functional/apps/visualize/_vega_chart.ts: "visualize app vega chart in visualize app vega chart with filters should render different data in response to filter change"
To update your PR or re-run it, just comment with:
Looks good. Has there been any thought given to @nchaulet's idea of using a SO (saved object) to track the status of the setup, since we haven't solved the issue of multiple Kibana instances running setup?
@neptunian there's an issue tracking it: #70333. Take a look and please leave any thoughts on the open questions.
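For context, the idea under discussion could look roughly like the following. Nothing here is decided in #70333; `FakeSavedObjectsClient`, the document shape, and the version check are all hypothetical stand-ins for Kibana's SavedObjectsClient and its version-conflict semantics:

```typescript
interface SetupSO {
  version: number;
  attributes: { status: 'idle' | 'in_progress' | 'complete' };
}

// Stand-in for a saved-objects client with optimistic concurrency control:
// an update only succeeds against the version the caller last read.
class FakeSavedObjectsClient {
  private doc: SetupSO = { version: 1, attributes: { status: 'idle' } };

  get(): SetupSO {
    return { version: this.doc.version, attributes: { ...this.doc.attributes } };
  }

  update(attributes: SetupSO['attributes'], version: number): boolean {
    if (version !== this.doc.version) return false; // conflict: another instance won
    this.doc = { version: version + 1, attributes };
    return true;
  }
}

// Each Kibana instance tries to flip idle -> in_progress; at most one wins,
// so only one instance runs setup at a time.
function tryClaimSetup(client: FakeSavedObjectsClient): boolean {
  const current = client.get();
  if (current.attributes.status !== 'idle') return false;
  return client.update({ status: 'in_progress' }, current.version);
}
```

The version check is what makes this safe across instances: even if two instances read `idle` concurrently, only the first update against that version succeeds.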
Backported in #75583 and #75587 (and referenced as elastic#75372 from a fork); each backport carries the same squashed commit message and CI log quoted above. The #75587 backport resolved a merge conflict in x-pack/plugins/ingest_manager/server/services/setup.ts.
Summary
- extracted the `/setup`-only changes from #74507
- run `setupIngestManager` on every `POST /setup`
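A sketch of what the summary describes: the `POST /setup` handler invokes `setupIngestManager` on every request rather than replaying a retained first result. The service body and response shape below are placeholders, not the plugin's real code:

```typescript
let setupRuns = 0;

async function setupIngestManager(): Promise<{ isInitialized: boolean }> {
  setupRuns += 1; // counts calls so the "runs every time" behavior is visible
  return { isInitialized: true };
}

async function postSetupHandler(): Promise<{ status: number; body: unknown }> {
  try {
    // runs every time; nothing is cached across requests
    return { status: 200, body: await setupIngestManager() };
  } catch (error) {
    return { status: 500, body: { message: String(error) } };
  }
}
```

Running setup on every request is what lets a failed initial setup be retried by simply calling the endpoint again, which is the bug #74587 describes.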
Fixes [Ingest Manager] Plugin won't retry failed initial setup (#74587)

Checklist
Delete any items that are not applicable to this PR.
Added tests