Therubyracer isn't thread-safe in production? #270
Hi, @Galmido, just to clarify whether this is a regression in the recently released new version: did you experience this issue with therubyracer 0.11, too?
Yes, the problem occurs with 0.11 too, but with 0.11 each server worker can handle many more requests before hanging. With 0.12, a single worker with 16 threads can respond to 100-500 requests before locking up; the number of threads per worker doesn't seem to matter. 0.11 can respond to around 1000-2000 requests before locking. I did find a way to work around the problem, but I still think this is a bug. V8 doesn't hang when it runs directly in the worker thread, with all of the code guarded by a separate mutex. Additionally, the context object must be disposed directly after the JS eval; otherwise the thread can still hang, though very rarely. I ran a one-hour stress test with a complex view evaluating 5 different doT.js templates, and this seems to resolve the problem completely: 100% successful responses.
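A minimal sketch of that workaround, assuming therubyracer's high-level API. JS_LOCK and render_js are names made up for this example, and Context#dispose is assumed to exist as the comment implies (if your version lacks it, drop the reference and let GC reclaim the context):

require 'v8'

JS_LOCK = Mutex.new

def render_js(source)
  JS_LOCK.synchronize do       # only one thread enters V8 at a time
    context = V8::Context.new
    begin
      context.eval(source)
    ensure
      context.dispose          # dispose immediately after eval; a lingering
    end                        # context can still (rarely) hang a thread
  end
end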
The Ruby Racer is thread safe, but it is not guaranteed deadlock free. Like Ruby, V8 also has a global interpreter lock, so only one thread can be executing code inside the V8 VM at a time. The answer, I think, is to release the GIL where possible when entering the V8 VM, and then reacquire it if and when V8 calls back into Ruby. This may be incompatible with the current strategy of acquiring the V8 lock from within Ruby code.
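A toy illustration of "thread safe but not deadlock free" (not therubyracer's actual code): plain Mutexes stand in for the two interpreter locks, and two threads take them in opposite order, producing the classic ABBA deadlock:

require 'thread'

gil     = Mutex.new # stand-in for Ruby's interpreter lock
v8_lock = Mutex.new # stand-in for V8's Locker

t1 = Thread.new do
  v8_lock.synchronize do     # holds V8...
    sleep 0.1                # ...a context switch happens here...
    gil.synchronize {}       # ...then needs Ruby, which t2 holds
  end
end

t2 = Thread.new do
  gil.synchronize do         # holds Ruby...
    sleep 0.1
    v8_lock.synchronize {}   # ...then needs V8, which t1 holds
  end
end

[t1, t2].each { |t| t.join(2) }              # join with a timeout so the demo exits
puts "deadlocked!" if t1.alive? && t2.alive? # both threads are still stuck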
I have a similar problem; here is the gist with the source code: it freezes after 10-50 requests when benchmarked with wrk. I have tried:

V8::C::Locker() do
  V8::C::HandleScope() do
    @cxt = V8::C::Context::New()
    begin
      @cxt.Enter()
      yield
    ensure
      @cxt.Exit()
    end
  end
end

and also
I think this could be a locking issue with the order in which the various VM locks are acquired and released. Currently, to call JS code, the sequence is:

Lock(V8) -> Unlock(Ruby) -> Call(JS)

If there is a context switch after V8 is locked, but before Ruby is unlocked, it can result in deadlock. Instead, it really ought to be:

Unlock(Ruby) -> Lock(V8) -> Call(JS)

The situation is worse for calling back out to Ruby from JavaScript. It currently looks like this:

Lock(V8) -> Unlock(Ruby) -> Call(JS) -> Lock(Ruby) -> Unlock(V8) -> Call(Ruby)

and should be:

Lock(V8) -> Unlock(Ruby) -> Call(JS) -> Unlock(V8) -> Lock(Ruby) -> Call(Ruby)

The reason is that for code like:

V8::C::Locker() do
  ...
end

to work, V8 is locked from Ruby, so Ruby must be locked at the same time. Fixing this means moving all of the locking code out of Ruby and into C++.
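A toy model of the corrected ordering, again with plain Mutexes standing in for the two VM locks (not therubyracer's real code). The discipline is that each lock is released before the next one is taken, so no thread ever holds both at once:

require 'thread'

RUBY_LOCK = Mutex.new # stand-in for Ruby's GIL
V8_LOCK   = Mutex.new # stand-in for V8's Locker

# Entering JS: Unlock(Ruby) -> Lock(V8) -> Call(JS)
def call_js
  RUBY_LOCK.unlock          # release Ruby *before* taking V8,
  V8_LOCK.synchronize do    # so both locks are never held together
    yield                   # "JS" runs here
  end
ensure
  RUBY_LOCK.lock            # reacquire Ruby on the way out
end

# JS calling back: Unlock(V8) -> Lock(Ruby) -> Call(Ruby)
def call_ruby_from_js
  V8_LOCK.unlock
  RUBY_LOCK.synchronize { yield }
ensure
  V8_LOCK.lock
end

# Usage: a Ruby thread is assumed to run holding its own lock.
RUBY_LOCK.lock
call_js { call_ruby_from_js { puts "safe callback" } }
RUBY_LOCK.unlock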
I was having the same problem. My shoryuken processes were getting stuck when the number of simultaneously active threads was high. My solution was a singleton class plus a mutex to process our JavaScript. My processes no longer get stuck, and I didn't notice any performance difference, probably because of what @cowboyd mentioned above: only one thread can execute inside the V8 VM at a time anyway.
require 'singleton'
require 'v8'

class JavaScript
  include Singleton

  def initialize
    @mutex = Mutex.new
  end

  def transform(source, payload, timeout = 5000)
    @mutex.synchronize do
      transform!(source, payload, timeout)
    end
  end

  private

  def transform!(source, payload, timeout)
    context = V8::Context.new timeout: timeout
    # ...
  end
end

# sample test:
# N.times.map do
#   Thread.new do
#     JavaScript.instance.transform('...', '...')
#   end
# end.each(&:join)

@cowboyd any ideas why that worked, given that therubyracer is thread safe and has a GIL?
This will prevent deadlock in libV8, see also: rubyjs/therubyracer#270
Same here when only using
Related to #432?
Hi,
I use therubyracer to evaluate some JS code (rendering templates with doT.js, in fact). In the development environment it works without any problems, tested with multithreaded Thin, multithreaded Puma, and WEBrick. No deadlocks/hangups.
But in production mode, therubyracer starts deadlocking all server threads. On an 8-core CPU with 32 GB of RAM and 16 server threads, Puma and Thin manage to serve 16 concurrent connections for about 5 seconds, and after that all threads start to hang. The server stops responding to any requests or signals.
I tried to put all of the therubyracer code into a separate thread, using a mutex, but without luck. All server threads stop responding at random moments, before or after executing the therubyracer code; sometimes they can even lock up in the middle of a puts :)

Ruby - 2.0.0-p195 x64
Rails - 4.0.0
therubyracer - 0.12.0
libv8 - 3.16.14.3 x86_64-linux
The problem occurs in any multithreaded server. Turning off multithreading and switching to multiprocessing resolves the problem completely.
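For reference, a minimal sketch of that multiprocess setup for Puma (the worker count is illustrative, matching the 8-core box above):

# config/puma.rb
workers 8        # one process per core, each with its own V8
threads 1, 1     # disable in-process threading
preload_app!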