[8.6] Reduce startup time by skipping update mappings step when possible (#145604) #146637

Merged: 2 commits merged into elastic:8.6 on Nov 30, 2022

Conversation

gsoldevila (Contributor)

Backport

This will backport the following commits from main to 8.6:

Questions?

Please refer to the Backport tool documentation.

Reduce startup time by skipping update mappings step when possible (elastic#145604)

The goal of this PR is to reduce the startup time of the Kibana server by improving the migration logic.

Fixes elastic#145743
Related to elastic#144035

The migration logic runs systematically at startup, whether customers are upgrading or not.
Historically, these steps have been very quick, but we recently learned of customers with more than **one million** Saved Objects stored, which makes the overall startup process slow even when there are no migrations to perform.

This PR specifically targets the case where there are no migrations to perform, i.e. a Kibana node is started against an ES cluster that is already up to date with respect to the stack version and the list of plugins.

In this scenario, we aim to skip the `UPDATE_TARGET_MAPPINGS` step of the migration logic, which internally runs the `updateAndPickupMappings` method; that method turns out to be expensive when the system indices contain lots of Saved Objects.
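Conceptually, the skip decision boils down to a hash comparison. Here is a minimal TypeScript sketch of the idea; the names `MappingMeta` and `canSkipUpdateMappings` are illustrative and are not the actual Kibana internals:

```typescript
// Illustrative sketch only (not the real Kibana implementation): compare the md5
// hashes stored in `.kibana._mapping._meta` against the hashes computed for the
// current mappings, and skip UPDATE_TARGET_MAPPINGS when they all match.
interface MappingMeta {
  migrationMappingPropertyHashes?: Record<string, string>;
}

function canSkipUpdateMappings(
  storedMeta: MappingMeta | undefined,
  currentHashes: Record<string, string>
): boolean {
  const storedHashes = storedMeta?.migrationMappingPropertyHashes;
  // No stored hashes (e.g. missing or tampered _meta): fall back to the full update.
  if (!storedHashes) return false;

  // Every current property hash must already be present in the index with the same value.
  return Object.entries(currentHashes).every(
    ([property, hash]) => storedHashes[property] === hash
  );
}
```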

I also tested the following scenarios locally (they are mirrored in the sketch right after this list):

- **Fresh install.** The step is not even run, as the `.kibana` index did not exist ✅
- **Stack version + list of plugins up to date.** Simply restarting Kibana after the fresh install. The step is run and leads to `DONE`, as the md5 hashes match those stored in `.kibana._mapping._meta` ✅
- **Faking re-enabling an old plugin.** I manually removed one of the md5 hashes from the stored `.kibana._mapping._meta` through `curl`, and then restarted Kibana. The step is run and leads to `UPDATE_TARGET_MAPPINGS`, as it used to before this PR ✅
- **Faking updating a plugin.** Same as the previous one, but altering an existing md5 hash stored in the `_meta`. ✅
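For reference, these scenarios map onto the hypothetical `canSkipUpdateMappings` sketch above as follows (the hash value is the one used in the curl example below):

```typescript
// Illustrative only, reusing the hypothetical canSkipUpdateMappings() sketch.
const current = { references: '7997cf5a56cc02bdc9c93361bde732b0' };

// Stack version + plugins up to date: hashes match -> the step leads to DONE.
canSkipUpdateMappings({ migrationMappingPropertyHashes: current }, current); // true

// Re-enabled old plugin: the hash is missing from the stored _meta -> UPDATE_TARGET_MAPPINGS.
canSkipUpdateMappings({ migrationMappingPropertyHashes: {} }, current); // false

// Updated plugin: the stored hash differs -> UPDATE_TARGET_MAPPINGS.
canSkipUpdateMappings(
  { migrationMappingPropertyHashes: { references: 'ffffffffffffffffffffffffffffffff' } },
  current
); // false
```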

And this is the curl command used to tamper with the stored `_meta`:
```bash
curl -X PUT "kibana:changeme@localhost:9200/.kibana/_mapping?pretty" -H 'Content-Type: application/json' -d'
{
  "_meta": {
      "migrationMappingPropertyHashes": {
        "references": "7997cf5a56cc02bdc9c93361bde732b0",
      }
  }
}
'
```

(cherry picked from commit b1e18a0)

# Conflicts:
#	packages/core/saved-objects/core-saved-objects-migration-server-internal/src/actions/index.ts
gsoldevila merged commit 42bf33f into elastic:8.6 on Nov 30, 2022
kibana-ci (Collaborator)

💚 Build Succeeded

Metrics [docs]

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run `node scripts/build_api_docs --plugin [yourplugin] --stats comments` for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| `@kbn/core-saved-objects-migration-server-internal` | 75 | 76 | +1 |

Public APIs missing exports

Total count of every type that is part of your API that should be exported but is not. This will cause broken links in the API documentation system. Target amount is 0. Run `node scripts/build_api_docs --plugin [yourplugin] --stats exports` for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| `@kbn/core-saved-objects-migration-server-internal` | 43 | 44 | +1 |
Unknown metric groups

API count

| id | before | after | diff |
| --- | --- | --- | --- |
| `@kbn/core-saved-objects-migration-server-internal` | 101 | 103 | +2 |

ESLint disabled in files

| id | before | after | diff |
| --- | --- | --- | --- |
| osquery | 1 | 2 | +1 |

ESLint disabled line counts

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 19 | 21 | +2 |
| fleet | 59 | 65 | +6 |
| osquery | 108 | 113 | +5 |
| securitySolution | 441 | 447 | +6 |
| total | | | +19 |

Total ESLint disabled count

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 20 | 22 | +2 |
| fleet | 68 | 74 | +6 |
| osquery | 109 | 115 | +6 |
| securitySolution | 518 | 524 | +6 |
| total | | | +20 |

History

To update your PR or re-run it, just comment with:
`@elasticmachine merge upstream`
