
Minification in parallel execution #894

Closed
corporateuser opened this issue Oct 30, 2023 · 4 comments · Fixed by SAP/ui5-builder#953
Labels
bug Something isn't working

@corporateuser

Expected Behavior

I should be able to run multiple graph.build in parallel with async functions. This is extremely useful for CAP projects where we have multiple independent UI5 applications.

Current Behavior

pool is a module-level singleton variable in node_modules/@ui5/builder/lib/processors/minifier.js and also in node_modules/@ui5/builder/lib/tasks/buildThemes.js. The getPool function registers a cleanup task only for the very first build (application) that creates the pool; all other builds reuse the same pool and no further cleanup tasks are registered. As a result, all pending tasks are canceled as soon as the first build finishes, and if some tasks are long running (minification of a very large file), they are canceled after 1000 ms as well, which is the default value.
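
The pattern looks roughly like this (a simplified, hypothetical sketch, not the actual minifier.js source; the worker script name is made up):

import workerpool from "workerpool";

let pool; // module-level singleton, shared by every parallel graph.build()

function getPool(taskUtil) {
  if (!pool) {
    pool = workerpool.pool("./minifierWorker.js");
    // The cleanup task is only registered here, i.e. only by the first build
    // that creates the pool; that build's cleanup terminates the pool for everyone
    taskUtil.registerCleanupTask(() => pool.terminate());
  }
  return pool;
}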

Example of the error:

[error] ProjectBuilder: Build failed in 13 s
[info] ProjectBuilder: Executing cleanup tasks...
[error] ProjectBuilder: Build failed in 13 s
[info] ProjectBuilder: Executing cleanup tasks...
[error] ProjectBuilder: Build failed in 13 s
[info] ProjectBuilder: Executing cleanup tasks...
[error] ProjectBuilder: Build failed in 13 s
[info] ProjectBuilder: Executing cleanup tasks...
[error] ProjectBuilder: Build failed in 13 s
[info] ProjectBuilder: Executing cleanup tasks...
Finished building /.../app/dataprivileges/ui5.yaml
Error: Pool terminated
    at /.../node_modules/workerpool/src/Pool.js:327:26
    at Array.forEach (<anonymous>)
    at Pool.terminate (/.../node_modules/workerpool/src/Pool.js:326:14)
    at file:///.../node_modules/@ui5/builder/lib/processors/minifier.js:36:23
    at file:///.../node_modules/@ui5/project/lib/build/helpers/ProjectBuildContext.js:52:11
    at Array.map (<anonymous>)
    at ProjectBuildContext.executeCleanupTasks (file:///.../node_modules/@ui5/project/lib/build/helpers/ProjectBuildContext.js:51:42)
    at file:///.../node_modules/@ui5/project/lib/build/helpers/BuildContext.js:85:15
    at Array.map (<anonymous>)
    at BuildContext.executeCleanupTasks (file:///.../node_modules/@ui5/project/lib/build/helpers/BuildContext.js:84:48)

A quick but inefficient way to fix this is to modify the cleanup task as follows:

taskUtil.registerCleanupTask(() => {
  // Busy-wait until the pool reports no more pending/active tasks,
  // then swap out the shared pool and terminate it
  let stats = pool.stats();
  while (stats.pendingTasks > 0 && stats.activeTasks > 0) {
    stats = pool.stats();
  }
  log.verbose(`Terminating workerpool`);
  const poolToBeTerminated = pool;
  pool = null;
  poolToBeTerminated.terminate();
});
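
A less CPU-hungry variant of the same idea (a sketch only, assuming the registered cleanup callback may be asynchronous and is awaited, as the executeCleanupTasks stack frames above suggest) would poll the pool stats instead of spinning on the main thread:

taskUtil.registerCleanupTask(async () => {
  // Poll until the pool has neither pending nor active tasks (sketch, not actual ui5-builder code)
  while (pool.stats().pendingTasks > 0 || pool.stats().activeTasks > 0) {
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  log.verbose(`Terminating workerpool`);
  const poolToBeTerminated = pool;
  pool = null;
  poolToBeTerminated.terminate();
});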

Sharing a common pool between different executions is actually a good idea: if a new pool were created for every graph.build, we would quickly deplete server resources.

Steps to Reproduce the Issue

Create several independent Fiori/UI5 applications
Try to build them in parallel with:

const ui5Apps = ['app/app1/ui5.yaml', '...', 'app/app30/ui5.yaml'];
await Promise.all(
  ui5Apps.map((ui) => buildUi(ui))
);

Where buildUi is:

import path from "node:path";
// graphFromPackageDependencies is exported by @ui5/project (v3) via "@ui5/project/graph"
import { graphFromPackageDependencies } from "@ui5/project/graph";

async function buildUi(ui) {
  const folderName = path.dirname(ui);
  const graph = await graphFromPackageDependencies({
    cwd: folderName,
    rootConfigPath: `./ui5.yaml`,
  });
  await graph.build({
    destPath: `app/dist/resources`,
    cleanDest: false,
    includedTasks: ['generateCachebusterInfo'],
    excludedTasks: ['replaceVersion'],
  });
}

Context

  • UI5 Module Version (output of ui5 --version when using the CLI): 3.7.1
corporateuser added the bug (Something isn't working) and needs triage (Needs to be investigated and confirmed as a valid issue that is not a duplicate) labels Oct 30, 2023
RandomByte self-assigned this Oct 30, 2023
@RandomByte
Member

Hey @corporateuser, thanks for raising this issue.

This is great input for us since we plan to refactor the use of workers in UI5 Tooling to allow for better reuse across tasks (and custom tasks). This issue should be easy to solve as part of that. However, I can't give you a timeline for when this would happen.

Until then, I don't see a good solution that we could apply right away. The fix you proposed might solve the immediate issue by relying on the workerpool stats, but I would prefer a solution within the UI5 Tooling world, for example by keeping track of how many builds have been started and, on every termination request, checking whether this is the last currently running build, only terminating the pool once that last build has finished. I would see this as part of the mentioned refactoring.
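
A minimal sketch of that reference-counting idea (hypothetical helpers, not part of the UI5 Tooling API; the worker script name is made up):

import workerpool from "workerpool";

let pool;
let runningBuilds = 0;

function acquirePool() {
  if (!pool) {
    pool = workerpool.pool("./minifierWorker.js");
  }
  runningBuilds++;
  return pool;
}

async function releasePool() {
  runningBuilds--;
  if (runningBuilds === 0 && pool) {
    // Only the last currently running build terminates the shared pool
    const poolToBeTerminated = pool;
    pool = null;
    await poolToBeTerminated.terminate();
  }
}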

Maybe we can find a workaround for your scenario until we have applied the necessary changes:

  • Have you considered building the projects in sequence? How much time does this add to the build?
  • Have you considered executing the builds in separate node processes? I.e. one process per build. I would expect this to result in an overall shorter build time (see the sketch below).
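
For the second option, a sketch of what that could look like (it assumes the ui5 CLI is installed and reuses the ui5Apps array from the reproduction steps above):

import path from "node:path";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Shared output folder, resolved once so it matches the destPath used in graph.build above
const destPath = path.resolve("app/dist/resources");

// One `ui5 build` process per application; the flags mirror the graph.build options above
await Promise.all(
  ui5Apps.map((ui) =>
    run("ui5", ["build", "--dest", destPath,
      "--include-task", "generateCachebusterInfo",
      "--exclude-task", "replaceVersion"], {
      cwd: path.dirname(ui),
    })
  )
);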

this will be canceled in 1000 ms as well, this is the default value.

We were not aware of this! We should definitely increase this timeout, since minification can easily take longer than that. Thanks!

RandomByte removed the needs triage (Needs to be investigated and confirmed as a valid issue that is not a duplicate) label Oct 31, 2023
@d3xter666
Contributor

d3xter666 commented Nov 7, 2023

Just one clarification after debugging the workerpool code so far: a timeout-based termination will not happen.
The timeout mentioned above comes from the workerTerminateTimeout property, which IMO is a bit confusingly named. It does not terminate the task(s) after X time; it only limits how long a worker gets to clean up its resources once termination has already been requested. Please check the description for it:

workerTerminateTimeout: number. The timeout in milliseconds to wait for a worker to cleanup it's resources on termination before stopping it forcefully. Default value is 1000.
Source: https://www.npmjs.com/package/workerpool#pool
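
For reference, this option is passed when the pool is created; a minimal sketch (the worker script name is made up):

import workerpool from "workerpool";

// Allow workers up to 5 s to clean up their resources after terminate() has been
// requested, before they are stopped forcefully (the default is 1000 ms)
const pool = workerpool.pool("./minifierWorker.js", {
  workerTerminateTimeout: 5000
});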

The core issue here is a race condition where the termination is explicitly invoked by our code in the cleanup task.

@corporateuser
Author

corporateuser commented Nov 13, 2023

Hello,

Have you considered building the projects in sequence? How much time does this add to the build?

This is exactly what we're doing now. We've recently done some optimization, so it now takes ~1-2 minutes; at the time I created the issue it was ~4-5 minutes.

Have you considered executing the builds in separate node processes? I.e. one process per build. I would expect this to result in an overall shorter build time.

We tried it a long time ago, when we had fewer applications and the process overhead was higher than building them sequentially; by now it should be the other way around. We'll try this.

Please check the description for it

To my understanding, "before stopping it forcefully" means cancellation of the running task, but I haven't dug deep enough into the workerpool code, so maybe it's just a misunderstanding of what "forcefully" means here. In any case, calling terminate on a pool that still has a running task does not sound right.

d3xter666 added a commit to SAP/ui5-builder that referenced this issue Nov 17, 2023
…953)

fixes: SAP/ui5-tooling#894
JIRA: CPOUI5FOUNDATION-751
depends on: SAP/ui5-project#677

Workerpool needs to wait for all the active tasks to complete before
terminating.

---------

Co-authored-by: Merlin Beutlberger <m.beutlberger@sap.com>
d3xter666 self-assigned this Nov 17, 2023
@d3xter666
Contributor

Hi @corporateuser,

The fix is available with the latest @ui5/cli@3.7.2 & @ui5/project@3.8.0
