AttributeError: 'SingleUserLabApp' object has no attribute 'io_loop' #609
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
I see that your stack trace has the error in jupyter_server/jupyter_server/serverapp.py, lines 2671 to 2682 at a1b013e.
So please consider updating to use jupyter-server and then re-run the stress tests using that 😃
Gah, you're right:
We have migrated to using the JupyterLab UI and have jupyterlab and all of that installed in our singleuser-server images, but apparently the hub isn't using it to spawn those. 🤦 Thanks for looking at this. I'll close it and report back on anything I find from scale testing when we switch to actually using jupyter-server 😅.
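For anyone hitting the same thing, here is a hedged sketch of how the hub could be pointed at the jupyter-server based single-user app. It assumes the deployed JupyterHub honors the JUPYTERHUB_SINGLEUSER_APP environment variable, which has not been verified against this particular deployment:

```python
# jupyterhub_config.py -- hedged sketch, not verified against this deployment.
# Assumes the installed JupyterHub honors JUPYTERHUB_SINGLEUSER_APP for
# selecting the jupyter_server-based app instead of the classic NotebookApp.
c.KubeSpawner.environment = {
    "JUPYTERHUB_SINGLEUSER_APP": "jupyter_server.serverapp.ServerApp",
}
# Land users in the JupyterLab UI once the server is up.
c.KubeSpawner.default_url = "/lab"
```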
Description
I'm doing some stress and scale testing on a JupyterHub deployment in a test environment, using zero-to-jupyterhub-k8s 1.2.0 with jupyterhub 1.5.0, kubespawner 1.1.2, and jupyter-server 1.11.2:
We build our own user image based on jupyter/scipy-notebook:837c0c870545 from docker-stacks, with some additional extensions and updates (like jupyterhub 1.5.0). I was running a scale-up test using hub-stress-test to 1000 singleuser-server pods (these are basically micro pods just to stress the hub and proxy [core pods]; we care less about the actual notebook servers doing anything).
In 1 out of 1000 notebook servers there was a failure to spawn. In the hub log I noticed this:
Looking in the notebook server pod logs for that pod I see this:
I'm not sure what's going on there; maybe a race condition while the pod is starting up? Or maybe, because it was taking a long time to start, something started failing in weird ways? Here is a gist of the notebook app logs:
https://gist.github.com/mriedem/384ff1578aca0163e743fe4ddea176a7
We can see that it's 34 seconds from the time the app starts up to the time that error happens. I'm wondering if maybe we hit this and the hub/kubespawner killed the pod?
https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#jupyterhub.spawner.Spawner.http_timeout
Actually, digging more into the hub logs, it looks like yes, the hub hits that http_timeout and then kills the pod. Here are the hub logs scoped to that pod: https://gist.github.com/mriedem/8d226d7f934ae27e71a35ef6ba8a13ec
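As a possible mitigation on our side while this is being debugged, the timeout can be loosened. A minimal sketch, assuming the standard Spawner traitlets; the 120/300 values are arbitrary illustrations, not recommendations:

```python
# jupyterhub_config.py -- hedged sketch of loosening the spawn timeouts.
# http_timeout: how long the hub waits for the single-user server to answer
# over HTTP after the pod starts (default 30s, which lines up with the ~34s
# startup seen in the gist above).
c.Spawner.http_timeout = 120  # arbitrary illustrative value
# start_timeout: how long the hub waits for the pod itself to start.
c.Spawner.start_timeout = 300  # arbitrary illustrative value
```

In a zero-to-jupyterhub-k8s deployment this would typically be injected via hub.extraConfig.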
So I guess the issue is just that there is an ugly AttributeError in the notebook app / jupyter-server logs when there is a race on startup: the pod is killed during startup before everything is set up. That's probably hard to predict and account for, though, so this is probably a low-priority issue to resolve.
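For illustration only (this is not the actual jupyter_server code; the class and method names are made up), this is the kind of guard in the stop path that would turn the red-herring traceback into a no-op when shutdown races with startup:

```python
from tornado.ioloop import IOLoop


class ToyServerApp:
    """Toy stand-in for the single-user server app; not jupyter_server code."""

    def start(self):
        # io_loop only exists once startup gets this far; a SIGTERM from the
        # hub/kubespawner killing a slow pod can arrive before this line runs.
        self.io_loop = IOLoop.current()
        self.io_loop.start()

    def stop(self):
        # Guarding with getattr avoids the AttributeError seen in the pod logs
        # when stop() races ahead of start().
        io_loop = getattr(self, "io_loop", None)
        if io_loop is None:
            return  # never fully started, nothing to stop
        io_loop.add_callback(io_loop.stop)
```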
Reproduce
Hard to reproduce. I've been running scale tests all day and have hit maybe a couple of the http_timeouts in the hub logs, but only this one AttributeError in the notebook server app logs.
Expected behavior
Not to see red-herring AttributeError tracebacks like this in the app logs.
Context