Minification in parallel execution #894
Comments
Hey @corporateuser, thanks for raising this issue. This is great input for us, since we plan to refactor the use of workers in UI5 Tooling to allow for better reuse across tasks (including custom tasks). This issue should be easy to solve as part of that. However, I can't give you a timeline for when this will happen, and until then I don't see a good solution that we could apply right away. The fix you proposed might solve the immediate issue by relying on the workerpool stats, but I would prefer a solution in the UI5 Tooling world, for example by keeping track of how many builds have been started and checking on every termination whether it is the last currently running build, only terminating once that has finished. This I would see as part of the mentioned refactoring. Maybe we can find a workaround for your scenario until we have applied the necessary changes:
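The bookkeeping described above can be sketched with a minimal, self-contained mock (names are illustrative, not the actual UI5 Tooling code): count started builds and terminate the shared pool only when the last one finishes.

```javascript
// Hypothetical sketch of the proposed refactoring: reference-count builds
// so that pool termination only happens after the last one completes.
let activeBuilds = 0;
let poolTerminated = false;

function onBuildStarted() {
  activeBuilds += 1;
}

function onBuildFinished() {
  activeBuilds -= 1;
  // Only the last currently running build triggers pool termination.
  if (activeBuilds === 0) {
    poolTerminated = true;
  }
}

// Two parallel builds: termination happens only after both complete.
onBuildStarted();
onBuildStarted();
onBuildFinished();
const terminatedEarly = poolTerminated; // still false: one build running
onBuildFinished();
const terminatedAtEnd = poolTerminated; // true: last build finished
```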
We were not aware of this! We should definitely increase this timeout, since minification can easily take longer than that. Thanks!
Just one clarification here after debugging: the core issue is a race condition where the termination is explicitly invoked by our code in the cleanup task.
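A minimal self-contained mock (illustrative names, not the actual Tooling sources) shows this race: the cleanup registered by the first build terminates the shared pool while a later build still needs it.

```javascript
// Mock shared pool: queued tasks are rejected once terminate() has run.
const sharedPool = {
  terminated: false,
  terminate() {
    this.terminated = true;
  },
  exec(task) {
    if (this.terminated) {
      return {status: "canceled"};
    }
    return {status: "done", value: task()};
  }
};

// Build 1 finishes and runs the cleanup it registered:
sharedPool.exec(() => "minify app1");
sharedPool.terminate();

// Build 2 reuses the same pool; its task is now canceled:
const result = sharedPool.exec(() => "minify app2");
```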
Hello,
This is exactly what we're doing now. We've recently done some optimization, so the build now takes ~1-2 minutes; at the time I created the issue it was ~4-5 minutes.
We tried this a long time ago, when we had fewer applications and the overhead was higher than building them sequentially; by now it should be the other way around. We'll try this.
To my understanding, stopping it forcefully means cancellation of the running task, but I've not dug into the implementation.
…953)

fixes: SAP/ui5-tooling#894
JIRA: CPOUI5FOUNDATION-751
depends on: SAP/ui5-project#677

Workerpool needs to wait for all the active tasks to complete before terminating.

Co-authored-by: Merlin Beutlberger <m.beutlberger@sap.com>
Hi @corporateuser, the fix is available with the latest @ui5/cli@3.7.2 & @ui5/project@3.8.0.
Expected Behavior

I should be able to run multiple `graph.build` calls in parallel with async functions. This is extremely useful for CAP projects where we have multiple independent UI5 applications.

Current Behavior
`pool` is a singleton variable in `node_modules/@ui5/builder/lib/processors/minifier.js` and also in `node_modules/@ui5/builder/lib/tasks/buildThemes.js`. The `getPool` function causes a cleanup task to be registered only for the very first task (application) created. All other tasks re-use the same pool, and no new cleanup tasks are created. As a result, all pending tasks are canceled when the first task finishes; in addition, any long-running task (e.g. minification of a very big file) is canceled after 1000 ms, the default termination timeout.

Example of the error:
A quick but inefficient way to fix this is to modify the cleanup task as follows:
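The original snippet is not preserved in this thread. A self-contained sketch of the idea, with a mock pool whose `stats()` mimics the shape of workerpool's stats output (field names assumed from that shape), might look like:

```javascript
// Mock of the shared pool; the real workerpool exposes stats() with
// counters such as activeTasks and pendingTasks.
function createPool(activeTasks, pendingTasks) {
  return {
    terminated: false,
    stats() {
      return {activeTasks, pendingTasks};
    },
    terminate() {
      this.terminated = true;
    }
  };
}

// Modified cleanup: skip termination while the pool still has work,
// instead of force-terminating after the default 1000 ms timeout.
function cleanup(pool) {
  const {activeTasks, pendingTasks} = pool.stats();
  if (activeTasks === 0 && pendingTasks === 0) {
    pool.terminate();
  }
}

const busyPool = createPool(2, 1);
cleanup(busyPool); // pool kept alive: other builds still use it

const idlePool = createPool(0, 0);
cleanup(idlePool); // no remaining work, safe to terminate
```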
Usage of a common pool between different executions is actually a good idea: if we create a new pool for every `graph.build`, we will quickly deplete server resources.

Steps to Reproduce the Issue
Create several independent Fiori/UI5 applications.
Try to do a parallel build with:
Where `build` is
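The original scripts are not preserved in this thread. A hypothetical, self-contained reconstruction (with `buildApp` stubbed in place of creating a real project graph and calling `graph.build`) might look like:

```javascript
// buildApp stands in for creating a project graph for one application
// and running graph.build on it; it is stubbed here so the sketch is
// self-contained.
async function buildApp(appDir) {
  // Real code would roughly create a graph for appDir and run
  // graph.build({destPath: ...}) on it.
  return `${appDir} built`;
}

// Building several independent applications in parallel is what triggers
// the premature pool termination described above.
function buildAllInParallel(appDirs) {
  return Promise.all(appDirs.map(buildApp));
}
```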
Context
`ui5 --version` (when using the CLI): 3.7.1