When reducing a non-decoded dataset (`decode_cf=False` upon opening), the encoding attrs are preserved. However, in `preprocess_ERA5` the input dataset is already decoded, so this does not help in preserving encoding: it is dropped by the reduction. This means the reduction in this function already prevents int dtypes in a `to_netcdf` file, so `prevent_dtype_int` adds no value in this function when the `expver` dimension is present. These files are then written as unzipped float32 dtypes. It would be good to make this more robust and transparent.
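The encoding-dropping behavior described above can be reproduced with a minimal sketch. This is not the dfm_tools code itself; the dataset, variable name, and encoding values below are hypothetical stand-ins for a scaled-int ERA5 variable with an `expver` dimension:

```python
import numpy as np
import xarray as xr

# Hypothetical dataset mimicking a decoded ERA5 variable with an expver dimension.
ds = xr.Dataset(
    {"msl": (("expver", "time"), np.ones((2, 4), dtype="float32"))},
    coords={"expver": [1, 5], "time": np.arange(4)},
)
# Encoding as xarray would set it when opening a scaled-int NetCDF file
# with decode_cf=True (values here are illustrative).
ds["msl"].encoding = {
    "dtype": "int16",
    "scale_factor": 0.1,
    "add_offset": 0.0,
    "_FillValue": -32767,
    "zlib": True,
}

# A reduction over expver returns a new variable with empty encoding,
# so a subsequent to_netcdf() writes plain, unzipped float32.
ds_reduced = ds.mean(dim="expver")
print(ds_reduced["msl"].encoding)  # empty dict: encoding was dropped

# One way to make this robust: copy the original encoding back onto the
# reduced variable before writing, restoring the scaled-int/zipped output.
ds_reduced["msl"].encoding = ds["msl"].encoding
```

Copying the encoding back by hand is only a workaround; the transparent fix suggested in the issue would handle this inside the preprocessing function itself.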
This is no longer an issue with datasets downloaded from the new CDS-beta, since the variable dtypes are now zipped float32 instead of scaled int. This issue is therefore only relevant for old datasets.
Furthermore, the `prevent_dtype_int()` function was simplified and merged into `dfmt.preprocess_ERA5()`, so this issue is largely obsolete. More info in #942.