Just after applying the oxygen correction in `seaexplorer.py`, a call is made to `ds.sortby("time")`. Perhaps this could be moved forward a few lines and a call to `drop_duplicates` added?
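A minimal sketch of that suggestion, assuming a standalone helper (the actual variable names and code path in `seaexplorer.py` may differ):

```python
import pandas as pd
import xarray as xr

def clean_timestamps(ds: xr.Dataset) -> xr.Dataset:
    """Hypothetical helper: sort by time, then drop duplicate
    timestamps, keeping the first occurrence of each."""
    return ds.sortby("time").drop_duplicates("time")

# Example: two samples share the 11:12:28.730 timestamp.
time = pd.to_datetime([
    "2023-01-01 11:12:28.730",
    "2023-01-01 11:12:28.690",
    "2023-01-01 11:12:28.730",
])
ds = xr.Dataset(
    {"oxygen": ("time", [201.0, 200.0, 201.0])},
    coords={"time": time},
)
out = clean_timestamps(ds)
print(out.sizes["time"])  # 2
```

Running this before the oxygen correction would guarantee a sorted, unique time index for the later `reindex`.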
If fed a pld1 file with duplicate timestamps, the `reindex` call in `utils.oxygen_concentration_correction` will fail:

```python
ds_temp = data.potential_temperature[~np.isnan(data.potential_temperature)].reindex(time=ds_oxy.time, method="nearest")
```
Instances of duplicate timestamps happen very rarely. In this snippet of pld file, we see the timestamp `11:12:28.730` repeated. This is easily fixed using xarray `Dataset.drop_duplicates`. Should this be performed as a catching step in the oxygen correction function? Or earlier in processing? Or should some other workaround be used?
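A small reproduction of the failure and the `drop_duplicates` fix (toy data; the real pld1 file and variable names are assumed):

```python
import pandas as pd
import xarray as xr

# Toy dataset with one duplicated timestamp, mimicking the pld1 case.
time = pd.to_datetime([
    "2023-01-01 11:12:28.690",
    "2023-01-01 11:12:28.730",
    "2023-01-01 11:12:28.730",
    "2023-01-01 11:12:28.770",
])
ds = xr.Dataset(
    {"oxygen": ("time", [200.0, 201.0, 201.0, 202.0])},
    coords={"time": time},
)

# reindex with method="nearest" needs a unique source index,
# so the duplicate label makes it raise.
try:
    ds.reindex(time=ds.time[:2], method="nearest")
except Exception as e:
    print("reindex failed:", e)

# Dropping duplicate timestamps first (keeping the first
# occurrence of each) restores a unique index.
clean = ds.drop_duplicates("time")
print(clean.sizes["time"])  # 3
```

`drop_duplicates` keeps the first sample for each repeated timestamp, which seems safe here since the duplicated rows carry identical values.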