Fix handling of corrupt files + proper offset handling for hdf5 files #552
Merged
Conversation
re e-support UBS-599337
re pull request #405
re pull request #446

Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO. It was re-issued as pull request #446 ("Better detection of corrupt hdf5, second take").
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.

Given an otherwise valid netCDF file that has a corrupted header, the netcdf library currently crashes. Instead, it should return NC_ENOTNC.

Additionally, the NC_check_file_type code does not do the forward search required by hdf5 files: it currently looks only at file position 0 instead of 512, 1024, 2048, ... Also, it turns out that the HDF4 magic number is assumed to always be at the beginning of the file (unlike HDF5); see https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf. The change is localized to libdispatch/dfile.c.

Also, it turns out that the code in NC_check_file_type is duplicated (mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.

This branch does the following:
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking in NC_check_file_type.
3. Add a proper forward search for HDF5 files (but not HDF4 files) to look for the magic number at offsets of 0, 512, 1024, ... (a sketch of this search is shown below).
4. Add the test tst_hdf5_offset.sh, which checks that hdf5 files with an offset are properly recognized. It does so by prefixing a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: added a -N flag to ncdump to force a specific output dataset name.
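For illustration only, here is a minimal C sketch of the forward search described in item 3. It is not the actual libdispatch/dfile.c code: the helper name `find_hdf5_superblock`, the `MAXOFFSET` cap, and the stdio-based I/O are assumptions. The 8-byte HDF5 signature and the 0, 512, 1024, 2048, ... doubling rule come from the HDF5 file format specification, and `NC_ENOTNC` is the real netcdf-c error code for an unrecognized file.

```c
#include <stdio.h>
#include <string.h>

#define NC_NOERR  0
#define NC_ENOTNC (-51)  /* real netcdf-c error: not a netCDF file */

/* 8-byte HDF5 superblock signature: \x89 H D F \r \n \x1a \n */
static const char HDF5_MAGIC[8] = "\211HDF\r\n\032\n";

/* Hypothetical cap on how far to search; the real code chooses its own bound. */
#define MAXOFFSET (1L << 20)

/* Probe offsets 0, 512, 1024, 2048, ... (doubling after 512) for the HDF5
 * signature. A seek failure or short read means the signature cannot be at
 * this or any later offset, so we stop and report NC_ENOTNC instead of
 * crashing on a corrupt or truncated header. */
static int find_hdf5_superblock(FILE *fp, long *offsetp)
{
    long offset = 0;
    char buf[8];

    for (;;) {
        if (fseek(fp, offset, SEEK_SET) != 0)
            return NC_ENOTNC;
        if (fread(buf, 1, sizeof buf, fp) != sizeof buf)
            return NC_ENOTNC;               /* hit EOF: no signature found */
        if (memcmp(buf, HDF5_MAGIC, sizeof buf) == 0) {
            *offsetp = offset;              /* found the superblock */
            return NC_NOERR;
        }
        offset = (offset == 0) ? 512 : offset * 2;
        if (offset > MAXOFFSET)
            return NC_ENOTNC;               /* give up: not an HDF5 file */
    }
}
```

The tst_hdf5_offset.sh test in item 4 exercises this behavior from the outside: it prepends 512, 1024, ... zero bytes to a known-good HDF5 file and checks that the library still recognizes the format.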
DennisHeimbigner changed the title from "H5offset.dmh" to "Fix handling of corrupt files + proper offset handling for hdf5 files" on Oct 28, 2017
netbsd-srcmastr pushed a commit to NetBSD/pkgsrc that referenced this pull request on May 16, 2018:
Upstream changes:

## 4.6.1 - March 15, 2018

* [Bug Fix] Corrected an issue which could result in a dap4 failure. See [Github #888](Unidata/netcdf-c#888) for more information.
* [Bug Fix][Enhancement] Allow `nccopy` to control output filter suppression. See [Github #894](Unidata/netcdf-c#894) for more information.
* [Enhancement] Reverted some new behaviors that, while in line with the netCDF specification, broke existing workflows. See [Github #843](Unidata/netcdf-c#843) for more information.
* [Bug Fix] Improved support for CRT builds with Visual Studio; improves zlib detection in the hdf5 library. See [Github #853](Unidata/netcdf-c#853) for more information.
* [Enhancement][Internal] Moved HDF4 into a distinct dispatch layer. See [Github #849](Unidata/netcdf-c#849) for more information.

## 4.6.0 - January 24, 2018

* [Enhancement] Full support for using HDF5 dynamic filters, both for reading and writing. See the file docs/filters.md.
* [Enhancement] Added an option to enable strict null-byte padding for headers; this padding was specified in the spec but was not enforced. Enabling this option will allow you to check your files, as it will return an NC_ENULLPAD error. It is possible for these files to have been written by older versions of libnetcdf. There is no effective problem caused by this lack of null padding, so enabling these options is informational only. The options for `configure` and `cmake` are `--enable-strict-null-byte-header-padding` and `-DENABLE_STRICT_NULL_BYTE_HEADER_PADDING`, respectively. See [Github #657](Unidata/netcdf-c#657) for more information.
* [Enhancement] Reverted behavior/handling of out-of-range attribute values to the pre-4.5.0 default. See [Github #512](Unidata/netcdf-c#512) for more information.
* [Bug] Fixed error in tst_parallel2.c. See [Github #545](Unidata/netcdf-c#545) for more information.
* [Bug] Fixed handling of corrupt files + proper offset handling for hdf5 files. See [Github #552](Unidata/netcdf-c#552) for more information.
* [Bug] Corrected a memory overflow in `tst_h_dimscales`; see [Github #511](Unidata/netcdf-c#511), [Github #505](Unidata/netcdf-c#505), [Github #363](Unidata/netcdf-c#363) and [Github #244](Unidata/netcdf-c#244) for more information.

## 4.5.0 - October 20, 2017

* Corrected an issue which could potentially result in a hang while using parallel file I/O. See [Github #449](Unidata/netcdf-c#449) for more information.
* Addressed an issue with `ncdump` not properly handling dates on a 366-day calendar. See [GitHub #359](Unidata/netcdf-c#359) for more information.

### 4.5.0-rc3 - September 29, 2017

* [Update] Due to ongoing issues, native CDF5 support has been disabled by **default**. You can use the options mentioned below (`--enable-cdf5` or `-DENABLE_CDF5=TRUE` for `configure` or `cmake`, respectively). Just be aware that, for the time being, reading/writing CDF5 files on 32-bit platforms may result in unexpected behavior when using extremely large variables. For 32-bit platforms it is best to continue using `NC_FORMAT_64BIT_OFFSET`.
* [Bug] Corrected an issue where older versions of curl might fail. See [GitHub #487](Unidata/netcdf-c#487) for more information.
* [Enhancement] Added options to enable/disable `CDF5` support at configure time for autotools and cmake-based builds. The options are `--enable/disable-cdf5` and `ENABLE_CDF5`, respectively. See [Github #484](Unidata/netcdf-c#484) for more information.
* [Bug Fix] Corrected an issue when subsetting a netcdf3 file via `nccopy -v/-V`. See [Github #425](Unidata/netcdf-c#425) and [Github #463](Unidata/netcdf-c#463) for more information.
* [Bug Fix] Corrected `--has-dap` and `--has-dap4` output for cmake-based builds. See [GitHub #473](Unidata/netcdf-c#473) for more information.
* [Bug Fix] Corrected an issue where `NC_64BIT_DATA` files were being read incorrectly by ncdump, despite the data having been written correctly. See [GitHub #457](Unidata/netcdf-c#457) for more information.
* [Bug Fix] Corrected a potential stack buffer overflow. See [GitHub #450](Unidata/netcdf-c#450) for more information.

### 4.5.0-rc2 - August 7, 2017

* [Bug Fix] Addressed an issue with how cmake was implementing large file support on 32-bit systems. See [GitHub #385](Unidata/netcdf-c#385) for more information.
* [Bug Fix] Addressed an issue where ncgen would not respect keyword case. See [GitHub #310](Unidata/netcdf-c#310) for more information.

### 4.5.0-rc1 - June 5, 2017

* [Enhancement] DAP4 is now included. Since dap2 is the default for URLs, dap4 must be specified by (1) using "dap4:" as the URL protocol, (2) appending "#protocol=dap4" to the end of the URL, or (3) appending "#dap4" to the end of the URL (a hedged usage sketch of these three forms follows after these release notes). Note that dap4 is enabled by default but remote testing is disabled until the testserver situation is resolved.
* [Enhancement] The remote testing server can now be specified with the `--with-testserver` option to ./configure.
* [Enhancement] Modified netCDF4 to use ASCII for NC_CHAR. See [Github Pull request #316](Unidata/netcdf-c#316) for more information.
* [Bug Fix] Corrected an error with how dimsizes might be read. See [Github #410](Unidata/netcdf-c#410) for more information.
* [Bug Fix] Corrected an issue where 'make check' would fail if 'make' or 'make all' had not run first. See [Github #339](Unidata/netcdf-c#339) for more information.
* [Bug Fix] Corrected an issue on Windows with large file tests. See [Github #385](Unidata/netcdf-c#385) for more information.
* [Bug Fix] Corrected an issue with diskless file access; see [Pull Request #400](Unidata/netcdf-c#400) and [Pull Request #403](Unidata/netcdf-c#403) for more information.
* [Upgrade] The bash-based test scripts have been upgraded to use a common test_common.sh include file that isolates build-specific information.
* [Refactor] The oc2 library is no longer independent of the main netcdf-c library. For example, it now uses ncuri, nclist, and ncbytes instead of its homegrown equivalents.
* [Bug Fix] `NC_EGLOBAL` is now properly returned when attempting to set a global `_FillValue` attribute. See [GitHub #388](Unidata/netcdf-c#388) and [GitHub #389](Unidata/netcdf-c#389) for more information.
* [Bug Fix] Corrected an issue where data loss would occur when `_FillValue` was mistakenly allowed to be redefined. See [Github #390](Unidata/netcdf-c#390) and [GitHub #387](Unidata/netcdf-c#387) for more information.
* [Upgrade][Bug] Corrected an issue regarding how "orphaned" DAS attributes were handled. See [GitHub #376](Unidata/netcdf-c#376) for more information.
* [Upgrade] Updated utf8proc.[ch] to use the version now maintained by the Julia Language project (https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
* [Bug] Addressed a conversion problem with Windows sscanf. This primarily affected some OPeNDAP URLs on Windows. See [GitHub #365](Unidata/netcdf-c#365) and [GitHub #366](Unidata/netcdf-c#366) for more information.
* [Enhancement] Added support for HDF5 collective metadata operations when available. Patch submitted by Greg Sjaardema; see [Pull request #335](Unidata/netcdf-c#335) for more information.
* [Bug] Addressed a potential type punning issue. See [GitHub #351](Unidata/netcdf-c#351) for more information.
* [Bug] Addressed an issue where netCDF wouldn't build on Windows systems using MSVC 2012. See [GitHub #304](Unidata/netcdf-c#304) for more information.
* [Bug] Fixed an issue related to potential type punning; see [GitHub #344](Unidata/netcdf-c#344) for more information.
* [Enhancement] Incorporated an enhancement provided by Greg Sjaardema, which may improve read/write times for some complex files. Basically, linked lists were replaced in some locations where it was safe to use an array/table. See [Pull request #328](Unidata/netcdf-c#328) for more information.
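As a hedged aside on the DAP4 bullet in the 4.5.0-rc1 notes above, here is a minimal C sketch of the three equivalent URL forms. The server host and path are placeholders, not real endpoints; `nc_open`, `nc_close`, `nc_strerror`, and `NC_NOWRITE` are the standard netcdf-c API.

```c
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid;

    /* (1) "dap4:" as the URL protocol. Host/path below are placeholders. */
    int stat = nc_open("dap4://example.org/data/sample.nc", NC_NOWRITE, &ncid);
    if (stat == NC_NOERR)
        nc_close(ncid);
    else
        fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));

    /* (2) and (3): equivalent fragment forms appended to a plain http URL:
     *   http://example.org/data/sample.nc#protocol=dap4
     *   http://example.org/data/sample.nc#dap4
     */
    return 0;
}
```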