Colab: Training steps / epochs not appearing on log, only D_0.pth and G_0.pth being created #321
#317 introduced this bug.
How do I revert to a previous version or something (in Colab)?
It is probably working but simply not logging.
That's what I'm beginning to think too, but it's still a bummer, since it's useful to know whether it's progressing or not.
Nope. It's not working.
You can install the 33180e9 branch to revert to v3.5.0.
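For reference, a hedged sketch of how that might look in a Colab cell; the `voicepaw/so-vits-svc-fork` repository URL is an assumption based on the project this issue tracker belongs to:

```python
# Colab cell: reinstall the package pinned to commit 33180e9.
# NOTE: the repository URL is an assumption; substitute the actual repo
# if the project lives elsewhere.
!pip install --force-reinstall "git+https://github.com/voicepaw/so-vits-svc-fork.git@33180e9"
```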
Okay, thanks, that seems to kind of work, but it still doesn't save checkpoints when I need it to. Usually stopping the cell works just fine; 3.5 gives an error and starts over every time. The end step seems to be set to 9999. Is there a way I can set it to something like 2500 instead?
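No answer to this appears in the thread, but for what it's worth, here is a hedged sketch of capping the training length by editing the generated config. It assumes the run length is controlled by an `epochs` field under `train` in `config.json`, as in upstream so-vits-svc; the file path and key names may differ in this fork, and the user's "end step" may map to a different knob entirely:

```python
# Hypothetical: lower the training length in the generated config.
# Both the path and the "train"/"epochs" keys are assumptions.
import json
from pathlib import Path

config_path = Path("configs/44k/config.json")  # assumed location
config = json.loads(config_path.read_text())
config["train"]["epochs"] = 2500  # assumed default is ~10000
config_path.write_text(json.dumps(config, indent=2))
```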
The logic is inverted: it uses the RichProgressBar when it *is* a notebook, not when it isn't.
What do you mean? Can you explain to me how to solve it, please?
Okay, so that code snippet checks whether it's running in Colab, and if it is, it uses the fancier progress bar. However, that progress bar does not seem to be available inside Colab, which is why 34j added the check in the first place. The logic is just the wrong way around: instead of falling back to the old/default progress bar when it detects it's running in Colab, it picks the fancy one that isn't available there. The fix is to invert the condition. I noticed this issue locally when the fancy progress bar wasn't available after an update. If 34j (or someone else) won't be getting to a pull request, I can do one once I'm back at my computer in around an hour.
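A minimal sketch of the inverted check being described, assuming the progress bars come from pytorch-lightning and the notebook detection looks roughly like this; the names below are illustrative, not the repo's actual identifiers:

```python
from pytorch_lightning.callbacks import RichProgressBar, TQDMProgressBar

def is_notebook() -> bool:
    """Heuristic: True when running under IPython (Jupyter/Colab)."""
    try:
        from IPython import get_ipython
        return get_ipython() is not None
    except ImportError:
        return False

# Buggy: selects the rich bar *inside* notebooks, where it is unavailable.
progress_bar = RichProgressBar() if is_notebook() else TQDMProgressBar()

# Fixed: invert the condition so notebooks fall back to the default bar.
progress_bar = TQDMProgressBar() if is_notebook() else RichProgressBar()
```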
Thank you!! ^^
Right now I'm getting the same error @outhipped had when I try to train on Colab. Switching to the 33180e9 branch works and allows me to train.
Make sure you are on the newest version by running the update command. Also, could you please open a new issue with the logs/errors, for better visibility and easier handling of the repo? Cheers 🙏
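For completeness, a hedged example of such an update in a Colab cell; the PyPI package name `so-vits-svc-fork` is an assumption based on the project this issue tracker belongs to:

```python
# Colab cell: upgrade to the latest released version.
# NOTE: the package name is an assumption; use the project's actual name.
!pip install -U so-vits-svc-fork
```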
@allcontributors add outhipped bug
I've put up a pull request to add @outhipped! 🎉
Describe the bug
Training step does not work. The log shows:
2023-04-13 22:53:18.009147: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-13 22:53:19.311998: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[22:53:20] INFO [22:53:20] NumExpr defaulting to 2 threads.
To Reproduce
From the "Automatic preprocessing" step onwards, the log stops at the "NumExpr defaulting to 2 threads." message.
Additional context
Stopped working 2 days ago.