Plugin lifecycle methods that return undefined block dependents from inferring the plugin's disabled state #69845
Pinging @elastic/kibana-platform (Team:Platform)
I was going to open an issue for this specific problem, as I had a very similar discussion recently here: #69581 (comment). IMHO, the correct solution, even if it's more impacting, would be to simply forbid `void` return types for plugin lifecycle methods:

```ts
export interface Plugin<
  TSetup = void,
  TStart = void,
  TPluginsSetup extends object = object,
  TPluginsStart extends object = object
>
```

would simply become

```ts
export interface Plugin<
  TSetup extends object = {},
  TStart extends object = {},
  TPluginsSetup extends object = object,
  TPluginsStart extends object = object
>
```

@elastic/kibana-platform what do the others think?
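Under that constraint, a plugin with nothing to expose would have to return an empty object explicitly from its lifecycle methods. A minimal sketch of what that would look like (the plugin name is hypothetical):

```ts
// Hypothetical plugin satisfying the proposed `TSetup extends object = {}` constraint:
// lifecycle methods always return an object, even when there is nothing to expose.
export class MyPlugin {
  public setup(): {} {
    return {};
  }

  public start(): {} {
    return {};
  }
}
```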
@pgayvallet I like your proposal for simplicity. OTOH I'm wondering if it's semantically closer to the status service and should be part of it.
You mean the correct way to assert if a plugin is available or not would/should be to use the status service (once #41983 is done), right?
@pgayvallet @restrry I can understand the desire to use TS to force plugin maintainers to do the right thing. From the phrasing of "correct way to assert if a plugin is available", I'm also inferring that we're seeking a canonical way for developers to determine plugin availability, which I can understand. But I have a strong desire that we consider the developer experience foremost when considering a solution.

As a developer, I don't want to have to add an empty return value (and an empty interface!) just to satisfy a TypeScript rule. This would impose significant burden on all Kibana plugin developers to update their plugins to adhere to this rule, for little (zero?) gain to the developers themselves. From a developer's perspective, I worry that it's just a new hoop to jump through.

Using the Platform to transparently fix this problem without requiring developers to change plugin code is very attractive to me because it doesn't disrupt plugin developers. Similarly, I reach for
@pgayvallet right.
@restrry Ah, I see! Thanks for explaining. I think I'd like to have the error case clearly defined and understood. If we're only speculating at this point, then I think the pragmatic course would be to ignore this possibility.

If we were to update plugin code to be robust in the event of any plugin dependency failure, then I think we'll introduce a non-trivial amount of complexity into the code (try/catches and fallback UI states if functionality is missing, and maybe some way of communicating the abnormal state to the user). Without a defined error case to reproduce in tests, we also won't have confidence that this code is behaving as expected.

Perhaps the simplest solution would then be to say that an unanticipated error in a plugin dependency (or a plugin in general) should be treated as a fatal error by Kibana?
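For illustration, a rough sketch of the kind of defensive handling being described above (all names are hypothetical, not real Kibana APIs):

```ts
interface DependencySetup {
  registerFeature(name: string): void;
}

interface MyPluginSetupDeps {
  dependencyPlugin?: DependencySetup;
}

export class MyPlugin {
  public setup(_core: unknown, plugins: MyPluginSetupDeps) {
    try {
      // Guard every use of the dependency in case it is missing or broken.
      plugins.dependencyPlugin?.registerFeature('my-feature');
    } catch (e) {
      // Fall back to a degraded state and record the abnormal condition
      // somewhere the user or operator can see it.
      console.warn('dependencyPlugin failed during setup; continuing without it', e);
    }
    return {};
  }
}
```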
Only my opinion (fundamentally I have no real objections against the proposed conversion):

It's not only that. ATM the contracts returned by the plugins are the exact structures passed down to the dependent plugins. Having an interference, even as basic as converting `undefined` to an empty object, breaks that guarantee.

Also, as the dependency is ATM returning nothing, what should the dependent declare for it in its deps interface?

```ts
interface MyPluginSetupDeps {
  pluginThatReturnsUndefined: {}; // or any, or what?
}
```

I agree that this is, like, minor, but still.

We're only talking about adding a `return {}` at the end of the lifecycle methods.

Also, regarding:

> This would impose significant burden on all Kibana plugin developers to update their plugins to adhere to this rule

Platform would very likely have to fix all existing violations if implementing this rule anyway, as the PR would not pass CI without these fixes. So it would only 'disrupt' new plugins.
The Platform will only specify a key in the dependencies object if the plugin is enabled; if the plugin is not enabled, the key will not be defined at all. This is a subtle behavior, but maybe we can leverage this instead:

```ts
if ('dependencyPlugin' in plugins) {
  // If the key exists in `plugins`, it's enabled.
}
```

Using this condition will work regardless of what the plugin returns (even if it returns `undefined`). It does require that optional dependencies are declared with optional keys:

```ts
// GOOD - using optional keys means the key may not exist at all in `Object.keys(plugins)`
interface MyStartDeps {
  dependencyPlugin?: DependencyStart;
}

// BAD - because the key may not always exist (in the case of optional deps)
interface MyStartDeps {
  dependencyPlugin: DependencyStart | undefined;
}
```
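Putting the two pieces together, a dependent plugin using the recommended check might look like this sketch (names are illustrative only):

```ts
interface DependencyStart {
  doSomething(): void;
}

interface MyStartDeps {
  // Optional key: absent entirely when the dependency plugin is disabled.
  dependencyPlugin?: DependencyStart;
}

export class MyPlugin {
  public start(_core: unknown, plugins: MyStartDeps) {
    if ('dependencyPlugin' in plugins) {
      // The dependency is enabled; its contract may still be undefined if it
      // returned nothing from its lifecycle method, so guard the call itself.
      plugins.dependencyPlugin?.doSomething();
    }
  }
}
```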
Closing as this has become stale, and #69845 (comment) is the recommended way to handle these scenarios at this point. |
Plugins that declare other plugins as dependencies will commonly inspect the `plugins` argument to see if that plugin is enabled or not (see the sketch below). However, plugins have the option of returning no value from their lifecycle methods. In this case, such a check will never evaluate to true because `plugins.dependencyPlugin` is `undefined`. Currently, the only way to fix this is to update the dependency plugin to return something from its lifecycle method. See this PR for an example: https://github.com/elastic/kibana/pull/68919/files#diff-70b0ed13bb3e2d531664f49faf0c804dR28.

This seems like an anti-pattern to me -- plugins shouldn't be able to "hide" their enabled state by returning undefined from their lifecycle methods. At the same time, it seems like unnecessary boilerplate to put the burden on plugin developers to always return empty objects from their lifecycle methods.
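A minimal sketch of the kind of check described above (the `dependencyPlugin` name and its contract are hypothetical):

```ts
interface MySetupDeps {
  dependencyPlugin?: { registerFeature(name: string): void };
}

export class MyPlugin {
  public setup(_core: unknown, plugins: MySetupDeps) {
    // If `dependencyPlugin` returns undefined from its own lifecycle method, this
    // condition never passes even though the plugin is actually enabled.
    if (plugins.dependencyPlugin !== undefined) {
      plugins.dependencyPlugin.registerFeature('my-feature');
    }
  }
}
```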
I think the simplest solution would be for the Platform to check if a lifecycle method returns `undefined`, and to provide an empty object to dependents instead. Thoughts?
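For illustration, a minimal sketch of what such a platform-side normalization could look like; this is an assumption about the approach, not the actual plugin service code:

```ts
// Normalize an undefined lifecycle return value into an empty object before
// exposing it to dependent plugins.
function normalizeContract<T extends object>(contract: T | undefined): T | {} {
  return contract ?? {};
}

// Illustrative usage inside a hypothetical plugin service:
// contracts.set(pluginName, normalizeContract(plugin.setup(coreSetup, pluginDeps)));
```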