[Monitoring] Remove deprecated watcher-based cluster alerts #85047
Conversation
Pinging @elastic/stack-monitoring (Team:Monitoring)
I am assuming the API is called after we have successfully created replacement Kibana alerts.
Thanks @ravikesarwani!
I don't think we have any docs for this, outside of general watch deletion docs. Should we create new ones, or link to what we have?
Linking to the existing DELETE Watch API is fine.
@elasticmachine merge upstream
@chrisronline Great job 🥇
We should make a note, though, to revert this PR if for some reason #87377 does not get merged into 7.12.
@elasticmachine merge upstream
Relates to #81020
This PR adds logic to call a new API introduced by the ES team (elastic/elasticsearch#50032) to disable and delete legacy watcher-based cluster alerts.
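For illustration only, here is a minimal sketch of what calling that API from the Kibana server might look like. The endpoint path and the helper name are assumptions based on elastic/elasticsearch#50032, not the exact code in this PR:

```ts
// Hypothetical sketch (not the code in this PR): ask Elasticsearch to disable
// and delete the legacy watcher-based cluster alerts.
// The endpoint path below is an assumption based on elastic/elasticsearch#50032.
import type { ElasticsearchClient } from 'src/core/server';

export async function deleteLegacyClusterAlerts(esClient: ElasticsearchClient): Promise<void> {
  // Raw transport request, in case a typed client helper is not available yet.
  await esClient.transport.request({
    method: 'POST',
    path: '/_monitoring/migrate/alerts', // assumed endpoint name
  });
  // Callers are expected to catch errors and surface them to the user
  // (for example via a toast), per the discussion in this PR.
}
```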
I'm not sure whether we should surface a toast to the user if this action fails (the PR currently does), and if we do, I'm not sure what it should say.
If the API call fails, the user needs to review the error message and most likely delete the watches manually on the affected cluster(s). We should probably supplement this work with an update to our docs explaining what this is and the steps for removing the watches manually.
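As a rough reference, manual cleanup could use the existing DELETE Watch API. This is a hedged sketch; the watch IDs shown are illustrative only, and the real legacy cluster-alert watch IDs vary per monitored cluster:

```ts
// Hypothetical sketch of manual cleanup via the existing DELETE Watch API
// (DELETE _watcher/watch/<watch_id>). The watch ID below is illustrative only.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function deleteWatches(watchIds: string[]): Promise<void> {
  for (const id of watchIds) {
    // Deletes one watch per ID on the affected cluster.
    await client.watcher.deleteWatch({ id });
  }
}

deleteWatches(['example_cluster_alert_watch']).catch(console.error);
```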
Curious to hear thoughts from @igoristic and @ravikesarwani.