--kill-others-on-fail flag not working as expected with --group --max-processes 1 #433

Closed
daniil4udo opened this issue Jul 9, 2023 · 6 comments · Fixed by #460

@daniil4udo

Description

When trying to run multiple scripts sequentially using concurrently, I noticed that the --kill-others-on-fail flag does not behave as expected when used together with the --group --max-processes 1 flags: the rest of the scripts keep running after one fails.

Expected Behavior

If one script fails (exits with a non-zero code), all running processes should be terminated when using the --kill-others-on-fail flag, even when running scripts sequentially with --group --max-processes 1.

Current Behavior

However, when I use the --kill-others-on-fail flag in conjunction with --group --max-processes 1, concurrently does not terminate all running processes when one script fails. This behaviour is not observed when I remove --group --max-processes 1 from the command.
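
A minimal reproduction (script names are placeholders, not the ones from the original project):

```sh
# Expected: once "exit 1" fails, the second command never runs.
# Observed: with --group --max-processes 1, it runs anyway.
concurrently --group --max-processes 1 --kill-others-on-fail "exit 1" "echo second"
```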

@gustavohenke
Member

Thanks for the report! I think this use case makes sense.

There's a potentially funny interaction with --restart-tries here. The first command will restart N times before giving up, but should other commands be given a chance to spawn at all?

@bozdoz

bozdoz commented Dec 7, 2023

How about a --kill-others-on-sigint? I'm not sure I understand the interaction with "restart" here. If the user cancels the process, I'd expect the main process (concurrently) to stop the subprocesses.

Don't the restart flags default to 0?

@gustavohenke
Member

Don't the restart flags default to 0?

Yes.

The issue I see starts like this:

```sh
concurrently --kill-others-on-fail --max-processes 1 "sleep 5 && echo foo" "exit 1"
```

  • Current behaviour is that command 1 exits, and the SIGTERM (from --kill-others-on-fail) is sent to nowhere, so command 0 still gets to run.
  • New behaviour would be that command 0 shouldn't be given a chance to run, since it was meant to be killed anyway.

But once you add --restart-tries 1, then it's fair that command 0 gets a chance to run... or not really?
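
For illustration, that variant of the command above would be:

```sh
# Command 1 fails and restarts once; should command 0 be given
# a slot in the meantime, knowing it would be killed anyway?
concurrently --kill-others-on-fail --max-processes 1 --restart-tries 1 "sleep 5 && echo foo" "exit 1"
```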

@bozdoz

bozdoz commented Dec 7, 2023

Should restart give the commands a chance to run? No. Not on SIGINT. The user is aborting the process.

Thinking of docker restart policies:

  • on-failure
  • always
  • unless-stopped

None of these restart when the container is manually stopped: even "always" is explicitly documented as not restarting a manually stopped container.
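
For reference, a generic Docker invocation showing a restart policy (my-image is a placeholder, not from this thread):

```sh
# Restarts up to 3 times on non-zero exit, but never after `docker stop`.
docker run --restart=on-failure:3 my-image
```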

@gustavohenke
Member

Yeah, SIGINT has quite a clear use case.

So, --restart-tries is documented as "restart processes that died". If concurrency is limited, then not all commands will be running at once, so commands that haven't spawned yet should probably not be given a chance to spawn at all when --kill-others/--kill-others-on-fail is set.
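
A sketch of that behaviour (commands are illustrative, not from the eventual implementation):

```sh
# Proposed: with --max-processes 1, if command 0 fails, commands 1
# and 2 are never spawned, instead of spawning later only to be killed.
concurrently --kill-others-on-fail --max-processes 1 "exit 1" "echo one" "echo two"
```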

I think this makes sense. WDYT?

@gustavohenke
Member

🚢 This is now fixed in v9.0.0!
https://github.com/open-cli-tools/concurrently/releases/tag/v9.0.0
