notification like "Failed launch debugger for child process xxxx". #712
I still have to investigate this, but as a note, creating 1000 processes and subsequently 1000 debug sessions in VS Code may be stretching the limits (my guess is that the machine ends up with too much to do and the timeouts are hit). Do you get this in a real-world use case?
I use this code to do some heavy work. I don't think Pool can create 1000 processes; the number of processes is tied to the CPU core count. Does the Python multiprocessing module really attempt to create 1000 processes?

```python
import multiprocessing as mp
from os import getpid

def func(idx):
    # record process id, just some file created
    obj = open('{}.log'.format(getpid()), 'w')
    obj.close()
    return idx + 1

class myClass:
    def __init__(self):
        self.data = []

    def test(self):
        with mp.Pool(mp.cpu_count()) as mypool:
            ans = mypool.map(func, range(100))
        self.data = ans

a = myClass()
a.test()
```
You're right, it should cap at your CPU count for simultaneous processes (then it'll start to reuse processes)... If you use fewer cores so that it doesn't stay at 100% CPU utilization on all cores, do you still hit the issue (say, use cpu_count()/2)?
I followed your advice, and the notification still appears. It only seems to happen when the subprocesses finish too quickly.

```python
import multiprocessing as mp
from os import getpid

def initFunc():
    obj = open('{}.log'.format(getpid()), 'w')
    obj.close()

def func(idx):
    return idx + 1

class myClass:
    def __init__(self):
        self.data = []

    def test(self):
        with mp.Pool(int(mp.cpu_count() / 2), initFunc) as mypool:
            # shows the notification
            ans = mypool.map(func, range(10000))
            # no notification
            # ans = mypool.map(func, range(100000000))
        self.data = ans

a = myClass()
a.test()
```
Does it make any difference if you do
The subprocesses should be paused until a client connects to them. But, for some reason, they do indeed exit early - in fact, most of them exit before the client even gets to request attach! It's not clear to me why this is happening. The logs don't show anything unusual. Also, so far as I can tell, this repros on Linux (regardless of start method), but not on Windows.
Can you clarify? It still looks like a bug to me in my local repros. When not debugging, everything works as expected. But when debugging, only a few processes spawn successfully; the rest exit before they even get to run anything. (My suspicion is that this has something to do with ...)
The last comment is confusing and not verified, just forget it. |
I just investigated this... apparently, adding a short delay before the pool finishes makes the error go away.
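A minimal sketch of that workaround, applied to a trimmed version of the repro above. The time.sleep() call and its placement are my assumption of what "a short delay before the pool finishes" means here, not something confirmed in the thread:

```python
import multiprocessing as mp
import time

def func(idx):
    return idx + 1

class myClass:
    def __init__(self):
        self.data = []

    def test(self):
        with mp.Pool(int(mp.cpu_count() / 2)) as mypool:
            ans = mypool.map(func, range(10000))
            # Assumed workaround: give the idle workers time to finish the
            # debugger handshake before the pool context manager exits and
            # terminates them.
            time.sleep(2)
        self.data = ans

a = myClass()
a.test()
```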
Alternatively, just making more work happen in the pool (say, changing range(10000) to range(100000000), as in the example above) also makes it go away. So, what seems to be happening is that the pool spawns all of its worker processes, the work is consumed by the first workers that come up, and the remaining workers are terminated when the pool exits, while the debugger is still connecting to them.
As a note, the execution itself seems to work fine (i.e. the computed result is correct). Given that this is in the connection management layer, ...
Thing is, child processes aren't supposed to start running anything until there's a debugger connection to them that has gone through the entire initialization stage (otherwise breakpoints might be skipped etc). So either the pool is killing processes due to some kind of timeout, before they even had a chance to run anything, or there's some bug in how subprocesses are resumed.
We're on the same page there.
Exactly, but not due to some timeout; it happens as part of the pool's regular operation, once everything scheduled to run has already finished. That is, in that example the multiprocessing pool asks for 8 processes, but only 4 are up and already receiving work while the other 4 are still being initialized. All of the work then finishes in those first 4 processes, and the remaining 4, which were never ready to run, are killed when the multiprocessing pool context manager exits (while the debugger is still in the connection phase for those processes).
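A small illustrative sketch of that behavior (hypothetical, not from the original thread): each task returns its worker's PID, so you can see how many workers actually executed anything. With trivial tasks and slow worker startup (as under a debugger that attaches to every child), the count can be smaller than the pool size.

```python
import multiprocessing as mp
from os import getpid

def func(idx):
    # Return the worker's PID so we can see which processes actually ran work.
    return getpid()

if __name__ == '__main__':
    pool_size = 8
    with mp.Pool(pool_size) as pool:
        pids = pool.map(func, range(10000))
    # With trivial tasks, the chunks may all be consumed by the workers that
    # started first; any workers still initializing are then terminated on
    # pool exit without ever running a task.
    print('{} of {} workers executed tasks'.format(len(set(pids)), pool_size))
```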
I also encountered this problem, but in my case I set a breakpoint in the program, and the program returned before the breakpoint was reached, which is when this problem occurs.
I just encountered this as well: 3/4 of the way through a machine learning task, VS Code started displaying message boxes "Failed to launch debugger for child process", and the main script terminated with no exceptions printed to the console.
Note that in the original issue, the computation itself completes successfully with the expected result - the error messages are essentially spurious (they're technically correct, just irrelevant). If you're actually seeing different results with and without the debugger, can you please file a separate issue?
I am having the same issue, and for me it is very invasive since the ...
Similar problem; mine is "invalid message session is already started".
How to fix it?
I'm running into a situation like issue #303 with multi-process debugging: VS Code pops up a notification like "Failed launch debugger for child process xxxx". Sometimes the debugger can't acquire the call stack info either.
log.zip
Originally posted by @liheyi360 in #709 (comment)