Hi, thank you for this project but I am facing some issues while running the code for Boolean Question Generation:
/home/codespace/.python/current/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:163: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with truncation is True.
Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.
If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with model_max_length or pass max_length when encoding/padding.
To avoid this warning, please instantiate this tokenizer with model_max_length set to your preferred value.
warnings.warn(
Canceled future for execute_request message before replies were done
The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.