Support restarting training job #901
Conversation
@@ -95,6 +99,16 @@ func WaitPIDS(pids []int, opts ...WaitPidsOpts) error {
	_, err := os.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			if opts[0].CompletedMarkedDirPath != "" {
I can't figure out how this helps solve the problem. Could you please explain more about it?
This PR adds `echo completed > $mountPath/$$$$.pid` after the training command, as shown below. If the training container succeeds, it touches a file named $mountPath/$processID.pid containing "completed".
The MetricsCollector container watches the training process; once it finds the process has exited, it checks the file $mountPath/$processID.pid to judge whether the training process succeeded. If it succeeded, it parses the metrics file; otherwise it raises an exception and exits.
Before this PR, once the metrics collector found the training process had exited, it started to parse the metrics file regardless of the training process's exit status.
For now it is hard to check another process's exit code (I tried to use "strace" to implement it, but that needs extra Linux capabilities). We could also call the k8s API to get pod.status.containerStatuses, but that would require adding an extra role to the worker pod, or another service to proxy it.
- args:
- python /mxnet/example/image-classification/train_mnist.py --batch-size=64 --lr=0.02273874688380991
--num-layers=3 --optimizer=sgd 1>/var/log/katib/metrics.log 2>&1 && echo completed
> /var/log/katib/$$$$.pid
command:
- sh
- -c
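For reference, a minimal sketch of how the marker-file check on the collector side could look. The /var/log/katib directory, the <pid>.pid file-name convention, and the trainingSucceeded helper are illustrative assumptions taken from the description above, not Katib's actual code:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"strings"
)

// trainingSucceeded checks the marker file that `echo completed > ...` is
// expected to write. The directory and file-name convention here are
// assumptions based on the PR description, not Katib's real implementation.
func trainingSucceeded(markerDir string, pid int) (bool, error) {
	marker := filepath.Join(markerDir, fmt.Sprintf("%d.pid", pid))
	data, err := ioutil.ReadFile(marker)
	if err != nil {
		if os.IsNotExist(err) {
			// No marker file: the training command exited before the
			// `&& echo completed` part ran, i.e. it failed.
			return false, nil
		}
		return false, err
	}
	return strings.TrimSpace(string(data)) == "completed", nil
}

func main() {
	ok, err := trainingSucceeded("/var/log/katib", 1234)
	if err != nil {
		panic(err)
	}
	if !ok {
		// Mirror the behavior described above: fail instead of parsing metrics.
		fmt.Fprintln(os.Stderr, "training process failed; not parsing metrics")
		os.Exit(1)
	}
	fmt.Println("training succeeded; parse /var/log/katib/metrics.log")
}
```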
Will this cause a similar pipe exit-code problem as tee does?
No. In fact, with tee the container exit code is always 0 even if the training process fails.
For this PR (see the sketch below):
- if the training process fails, the container returns the training process's exit code (`&& echo` will not be executed)
- if the training process succeeds (exit code 0), `&& echo` returns 0 too, so the container exit code is also 0
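A small sketch of that exit-code behavior, using os/exec to stand in for the container runtime; the marker path and "training" commands below are placeholders, not Katib's actual configuration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runWrapped mimics the container command: the marker file is only written
// when the training command exits 0, and the overall exit code is the
// training command's exit code.
func runWrapped(trainingCmd string) int {
	wrapped := trainingCmd + " && echo completed > /tmp/1234.pid"
	err := exec.Command("sh", "-c", wrapped).Run()
	if err == nil {
		return 0
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	return -1 // the shell itself could not be started
}

func main() {
	fmt.Println(runWrapped("true"))   // prints 0: marker file is written
	fmt.Println(runWrapped("exit 3")) // prints 3: `&& echo` never runs, no marker
}
```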
Oh yeah, I misunderstood the logic here. SGTM.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: hougangliu. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve
Fixes: #896