Rare panic(s) in goja #867
I am also seeing this and can reproduce it quite reliably by setting my stages configuration to ramp up and down very quickly.
I am running this on an AWS EC2 c4.2xlarge instance in Docker, using the image https://hub.docker.com/layers/loadimpact/k6/latest/images/sha256-da2f4b158e88eb9e0a118d973df31315cb4496e4fbbd27fa9876bb8414e2b9a7
@tateexon, thanks for the information and sorry for the inconvenience. We forgot to update this issue, but we actually realized that the cause of these panics is the current scheduling and reuse of VUs. As you've pointed out, it can be triggered by rapidly ramping VUs up and down, and we've also seen the panics manifest when the k6 REST API is used to set the number of VUs directly. In these situations, k6 sometimes doesn't properly wait for the execution in a VU to finish before it tries to reuse it, thus executing multiple things in the same JS runtime and causing a panic. Due to the heavy refactoring needed to properly fix the underlying cause, we chose not to fix it directly in the current scheduler, and instead fixed it as part of #1007, which involved rewriting the execution scheduling code anyway. We plan to finish that PR and merge it in.
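To make the failure mode concrete, here is a minimal standalone Go sketch (not k6's actual code): a `*goja.Runtime` is not safe for concurrent use, so racing two executions on the same runtime can corrupt the interpreter's state and produce exactly this kind of panic.

```go
package main

import (
	"sync"

	"github.com/dop251/goja"
)

func main() {
	rt := goja.New() // a single JS runtime, analogous to one VU's runtime

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Running two scripts on one Runtime at the same time is a data
			// race: goja runtimes are not goroutine-safe, so concurrent use
			// can overwrite interpreter state and panic, as described above.
			_, _ = rt.RunString(`var s = 0; for (var i = 0; i < 1e6; i++) { s += i; }`)
		}()
	}
	wg.Wait()
}
```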
@na-- Thank you very much for the update. Glad the issue has been triaged and a way forward is known! For now I have just increased the number of containers/EC2 instances running k6 to get the same ramp up/down without overloading any single k6 instance. I look forward to the fix in the future!
This should fix #867. This is not ... a great fix, in the sense that while working on it I found out we are also asynchronously touching VU.Runtime, which also shouldn't happen, and the currently added mutex should probably be locked in most other functions too. But this should fix the immediate problem, and the rest should be fixed in a more ... complete redesign :)
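A minimal sketch of the shape of that fix, assuming the approach described above (the `lockedVU` type and `RunOnce` method are hypothetical names for illustration, not k6's actual API): every execution on a VU's runtime must hold the same lock, so the runtime can never be reused while a previous iteration is still running.

```go
package main

import (
	"sync"

	"github.com/dop251/goja"
)

// lockedVU is a hypothetical, illustrative type (not k6's real VU type):
// it pairs a goja runtime with a mutex that serializes all executions.
type lockedVU struct {
	mu sync.Mutex
	rt *goja.Runtime
}

// RunOnce executes one script iteration on the VU's runtime.
func (vu *lockedVU) RunOnce(src string) (goja.Value, error) {
	vu.mu.Lock()         // wait for any in-flight iteration to finish
	defer vu.mu.Unlock() // release the runtime for the next reuse
	return vu.rt.RunString(src)
}
```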
Very rarely, k6 panics for an unknown reason, deep in the goja runtime. The cause of the panics seems to be that we've somehow concurrently executed two different things in the same goja runtime and one of them overwrote the stack of the other... Or maybe it's just a bug in goja?
Stack traces that we've seen (likely the same issue, but maybe not): [two stack traces omitted]