workflow for server development #25
Yes, this was already requested some time ago; I rejected it at the time. I think I've changed my mind now; this would be a good feature. I just don't really have the time to develop it. Leaving open so someone can look at it if they want.
@passcod I can try to implement this, but can you give me some advice as to how to stop an already-running process?
Kill the child processes using this: http://doc.rust-lang.org/std/process/struct.Child.html#method.kill But there might be a better way to do things :)
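For reference, the suggested approach looks roughly like this with std alone (a minimal sketch; `start` and `restart` are hypothetical helper names, not cargo-watch's actual code):

```rust
use std::io;
use std::process::{Child, Command};

// Spawn the watched command; `cargo run` is just an example.
fn start() -> io::Result<Child> {
    Command::new("cargo").arg("run").spawn()
}

// Kill the previous instance before launching a new one.
fn restart(old: &mut Child) -> io::Result<Child> {
    old.kill()?; // force-kill (SIGKILL on Unix)
    old.wait()?; // reap it so it doesn't linger as a zombie
    start()
}
```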
@ivegotasthma Are you coding on it right now? I need that feature, too. So I would start implementing it now. But I don't want to steal your job here ;)
I am working on it, but I don't have experience with synchronization.
I just saw your commit; it looks like it could already work. Does it?
It doesn't work yet because I need a RwLock around the Config. When a new […] Thanks for the feedback, it's very welcome. :)
As it turns out, it's pretty difficult. This issue and #2 are not easily solvable. The reason for this is the API of […]. I see a few possibilities here: […]
I hope everything is understandable. I wanted to ask everyone involved what you think of it. Personally, I would choose the first option, because it's the easiest and the busy-timeout-waiting probably won't cause any significant CPU load. The fourth option is worth thinking about too, IMO. What do you think?
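For illustration, option (1), busy-timeout-waiting, would look roughly like the sketch below. Note this uses `Child::try_wait`, which did not exist at the time of this thread (it was stabilized later, in Rust 1.18); the `change_pending` callback is a hypothetical stand-in for the file-watching side.

```rust
use std::io;
use std::process::Child;
use std::thread;
use std::time::Duration;

// Poll the child on a short interval instead of blocking in wait(),
// so the loop can also react to file-change events.
fn busy_wait(child: &mut Child, change_pending: impl Fn() -> bool) -> io::Result<()> {
    loop {
        // Non-blocking status check: Some(_) once the child has exited.
        if child.try_wait()?.is_some() {
            return Ok(());
        }
        if change_pending() {
            child.kill()?;
            child.wait()?; // reap
            return Ok(());
        }
        thread::sleep(Duration::from_millis(10));
    }
}
```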
Did some quick profiling:

```rust
fn main() {
    // Nanoseconds to sleep per loop iteration, from the command line.
    let ns: u32 = std::env::args().nth(1).unwrap().parse().unwrap();
    let dur = std::time::Duration::new(0, ns);
    loop {
        std::thread::sleep(dur);
    }
}
```

I don't even know if this is comparable with the […]. If we decide for (1), I would do some more profiling, of course.
(2) is undesirable, I agree. (3) is obviously not a solution. (4) is interesting, but also platform-dependent. (1) seems the best out of those 5. After reading some more on the issue in the context of Rust, (1) seems like our only sane option until non-blocking process status checking gets into Rust. Someday. If you're going to do some more profiling, could you check this scenario? Pseudocode: […] That is, what happens if, instead of letting […]
Haven't found the RFC issue, thanks for that. And that is a good idea: we can compare the differences between letting […] sleep and sleeping ourselves, although I honestly don't expect any differences. Is it decided then? :P So I will start implementing it with the wait-timeout crate.
It is indeed decided :)
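A rough sketch of the agreed approach with the wait-timeout crate: its `ChildExt::wait_timeout` returns `Ok(None)` when the timeout elapses, so the loop wakes up periodically. The restart decision itself is elided here.

```rust
use std::process::Command;
use std::time::Duration;
use wait_timeout::ChildExt; // from the wait-timeout crate

fn main() -> std::io::Result<()> {
    let mut child = Command::new("cargo").arg("run").spawn()?;
    loop {
        // Block for at most one second, then wake up and decide
        // whether a file change means we should kill and restart.
        match child.wait_timeout(Duration::from_secs(1))? {
            Some(status) => {
                println!("child exited with {}", status);
                break;
            }
            None => { /* still running; check for pending changes here */ }
        }
    }
    Ok(())
}
```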
Implemented in v3.1.0. I've tested it by using […]
Nevermind, 3.1.0 doesn't work for this. It seems to try to kill the child process, but fails somehow, and then assumes the process was killed, so it tries to start further […]. I've tried: […]
Annoyingly, sending a signal (SIGKILL, SIGTERM, SIGINT…) to the child process directly from the command line works.
Wow, this is strange. I thought that the process had been killed but the port wasn't freed yet; but indeed, it looks like the process is still running. I just checked: the documentation says that the method should only be used with great care. Maybe it somehow disturbs the […]. But I was actually pretty surprised that you merged my PR already, since you had a few remarks. Sending a signal like SIGTERM or SIGINT first could be nice, but I'd rather avoid unsafe code. And I did not understand your comment completely: of all the things you tried, nothing worked? What happened when calling […]?
I had some comments, but realised I could merge and implement them myself; no need to drag it on :) So the current release sends a SIGTERM if on Unix, and a normal kill (I suppose? not sure how it works) on Windows. Of all the things I tried and listed, nothing worked indeed. Same behaviour every time: cargo watch goes to kill the child process, thinks it got it, and proceeds to run the next command. In the meantime, the child has not been killed, hence the problem. I haven't tested any of this on Windows, so I don't know if it works there.
One thing I found is that one can use […].
Going to bed nowish; I'll pick this up again tomorrow. Feel free to experiment! :)
The docs say that […]. And yeah, sure, no hurry. Time zones make open source more difficult :P
I made a commit after merging that uses libc on Unix to send SIGTERM to the process instead of using […].
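The commit itself isn't quoted in the thread, but the gist of sending SIGTERM through the libc crate is something like this Unix-only sketch (`send_sigterm` is a hypothetical name):

```rust
use std::process::Child;

// Ask the child to terminate gracefully; Child::kill sends SIGKILL,
// which the child cannot catch to clean up after itself.
#[cfg(unix)]
fn send_sigterm(child: &Child) {
    unsafe {
        // child.id() is the OS process id.
        libc::kill(child.id() as libc::pid_t, libc::SIGTERM);
    }
}
```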
Ok, so I guess the problem is that […]:

```rust
Command::new(...)./* args... */.status()
```

So it's similar to our code (= no magic involved). Furthermore, I just tested and verified with a small example: if process A starts process B and then waits for it (with […]) […]. I'm not yet sure how to correctly solve this problem. Maybe killing a whole process tree or something. Will look into that soon.
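That experiment can be reproduced in a few lines (a sketch; a shell stands in for cargo as the intermediate process, and the trailing `true` stops the shell from exec-ing sleep directly, though this can vary slightly between shells):

```rust
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // B: a shell that forks a grandchild C (`sleep`) and waits for it.
    let mut b = Command::new("sh")
        .args(&["-c", "sleep 300; true"])
        .spawn()?;
    sleep(Duration::from_secs(1));
    b.kill()?; // kills only the shell (B)...
    b.wait()?;
    // ...the grandchild `sleep 300` keeps running; verify with
    // `ps aux | grep sleep`.
    Ok(())
}
```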
Ah! Yes, that would explain it. Well done figuring it out.
The caveat in wait-timeout is mainly just indicating that once a process has been successfully waited on (either through that crate or the standard library), you can no longer consider the pid valid to do further operations like wait, kill, etc. The behavior you may be seeing here is that Cargo spawns subprocesses which may not be getting killed. When you kill the parent it doesn't automatically kill the children, so they'll still be running around. Although after reading this thread it's not clear to me what the failure mode is, so this may not be the case either.
Yes, that is the behaviour we're seeing. Now we just want to kill the process tree. One thing I have found, but that may be overkill, is to create a subprocess that uses […]. Alternatively, the command-line utility […].
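For reference, the usual Unix building block here is a process group: put the child in its own group at spawn time, then signal the whole group so that both cargo and the server it spawned receive it. A sketch using the libc crate and `pre_exec` (called `before_exec` at the time of this thread); this is not necessarily what cargo-watch ended up shipping:

```rust
use std::io;
use std::os::unix::process::CommandExt; // Unix-only
use std::process::{Child, Command};

fn spawn_in_group(mut cmd: Command) -> io::Result<Child> {
    unsafe {
        cmd.pre_exec(|| {
            // Runs in the child between fork and exec: make the
            // child the leader of a brand-new process group.
            libc::setpgid(0, 0);
            Ok(())
        });
    }
    cmd.spawn()
}

fn kill_group(child: &Child) {
    unsafe {
        // A negative pid signals every member of that process group.
        libc::kill(-(child.id() as libc::pid_t), libc::SIGTERM);
    }
}
```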
I'm wondering how Ctrl-C works, as it obviously kills the entire thing; it doesn't leave a dangling process. Sending SIGINT to the cargo run process didn't seem to work. Maybe we ought to discuss this with the Cargo people.
Hi all, I wrote a small script for this, as I needed it myself (I did not know about cargo-watch at the time). Until it has been solved in cargo-watch, feel free to use my script. My solution to the issue is to pass the target binary as an argument to the script, thereby sidestepping cargo run in this particular case. Hope it helps, and hopefully we get support for this in cargo-watch soon. https://users.rust-lang.org/t/small-script-to-watch-and-re-launch-rust-code
Interesting; if I find the need to switch, I will look into it. Currently I have things set up using my script, but thanks for telling me about watchexec =)
This will be the next feature added. I don't have an ETA, but it's at the top of the to-do list.
Would duct be able to help here?
Oh, that is perfect! I was just thinking something like that would be needed, but I wasn't looking forward to implementing it myself. That really helps. I'll work on replacing all the executy bits in Cargo Watch with duct. Thanks @bb010g!
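For context, a sketch of the duct API that makes it attractive here: `cmd!` builds an expression, `start` runs it without blocking, and `kill` tears down what duct started (its exact process-tree semantics get probed further down this thread).

```rust
use duct::cmd;

fn main() -> std::io::Result<()> {
    // Start `cargo run` without blocking the watcher thread.
    let handle = cmd!("cargo", "run").start()?;
    // ...later, on a file change: kill the running expression,
    // then spawn a fresh one.
    handle.kill()?;
    Ok(())
}
```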
Preliminary testing is promising. Barring any big surprises, support for this will (finally!) land in the next release, within a few days.
There's new support for this in version 4.0.0, which I just released.
Hi @passcod, just had some time to do some basic testing, but I can't get it to work; maybe I'm missing something obvious… I'm not sure where the problem is. Here is what I can observe with this small project:

```rust
// main.rs
#![feature(plugin)]
#![plugin(rocket_codegen)]
extern crate rocket;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
```

```toml
# Cargo.toml
[package]
name = "hello-rocket"
version = "0.1.0"
authors = ["Mathieu Rochette <mathieu@texthtml.net>"]

[dependencies]
rocket = "0.2.3"
rocket_codegen = "0.2.3"
```
http://localhost:8000/ now prints "Hello, world!". I replace […]

but http://localhost:8000/ still prints "Hello, world!" :( I replace […]

http://localhost:8000/ still prints "Hello, world!" :(

I replace […]

cargo-watch now exits and leaves behind the last 2 instances: […]

I hope this can be helpful to fix it; please tell me if I can do more debugging.

PS: […]
Hmm. When killing: […]
Opened an issue on Duct for this.
I've written a new version that just wraps watchexec. Check it out: https://github.com/passcod/cargo-watch/tree/just-wrap-watchexec
And if you don't have it already, you'll also need to install watchexec.
hey @passcod, just tried with the just-wrap-watchexec branch: […]
Commit: "Probably help debugging (see comments on #25)"
Run with `RUST_LOG=cargo_watch=debug` […]
not silly at all, that was it!

```
RUST_LOG=cargo_watch=debug cargo watch -x run
```

[…]
Ah, that might be an inotify (or similar) watch-limit issue. If you are on Linux, see: https://github.com/passcod/cargo-watch#linux-if-it-fails-to-watch-some-deep-directories-but-not-others And see this for the upstream issue: notify-rs/notify#103
When I run […]
Yes, if you use this version: https://github.com/passcod/cargo-watch/tree/just-wrap-watchexec it will literally just wrap watchexec and do exactly that. You're using the v4 version (from the master branch) that does not have a fix for this issue (yet).
@Boscop please ask watchexec questions on the watchexec repo :) And if you want to ask workspace-related questions about cargo-watch, open a new issue, thanks. Also check out the Troubleshooting section of the README.
Finally released properly in 5.0.0.
When building a server (e.g. a webserver that listens on port 3000), it could be useful to automatically kill the already-running server (after successful re-compilation) before starting the new one. Otherwise, the server cannot start again because the old one still listens on the same port.

Here is a log of what is going on if the server is not killed: […]