Segmentation fault during collection cleanup on possibly corrupted dbm data #3082
Hi @twouters, thanks for reporting this issue. I don't remember now, but maybe @marcstern already sent a fix for this bug; however, there was a strange issue and we had to revert many PRs (see #3074). Now we are working on the CI workflow for mod_security2 too (v3 already has one), and then we can start to re-send the PRs. I will let you know here when the CI is done - after that, if you think you can, please send a PR to fix this issue. Thanks again.
To be fair: my proposed patch only prevents a crash when this situation is triggered; it doesn't prevent the cause. Nor did I take the time to figure out why value_len became 0, so it's more of a quick fix.

We use a cronjob to clean up the persistent collections with sdbm-util, because parsing the collection tends to become too slow when the files grow too big. This happens while Apache is running, which might be the cause of the corruption, or it's simply because persistent collections aren't entirely thread-safe. Hard to tell. Maybe I should modify the patch to produce an error log entry when it happens, so we can keep track and figure out (in the long run) whether it still happens if we stop Apache before cleaning up the collections.

I'm running thousands of servers and only a handful have run into this issue over a span of several months, so it'll take time to get to the bottom of this unless someone with knowledge of the sdbm data structure (or the handling of the data) can help reproduce the bug.

If you're OK with accepting my proposed patch (assuming it's not already fixed by Marc), I'll be happy to push a PR for it. Just know that I won't be able to produce a proper test case at this time, because I don't know how to reliably trigger the right conditions.
@marcstern could you take a look at this issue?
Hi @twouters, I never encountered this, so I didn't fix it. I think your patch is incorrect. Example:
With your patch, it won't return and we'll copy memory up to offset 5. What about a more explicit check: And the same check for var->name_len, I imagine (anything could be corrupted). I would merge such a fix.
We were already running with a modified patch, but checking for `< 1` indeed seems more appropriate:
I'll push a PR.
When var->value_len somehow becomes 0, we risk wrapping around to 4294967295 due to it being an unsigned int. Fixes owasp-modsecurity#3082
Describe the bug
Apache segfaults during processing of requests when the persistent collection is being processed to remove stale records.
Logs and dumps
Note: IP addresses have been obfuscated
To Reproduce
Not sure. I guess: have corrupted persistent collection data files (I made backups, if you're interested) and perform some specific requests to trigger a segfault (I don't know exactly what triggers it).
Expected behavior
The webserver doesn't crash.
Server (please complete the following information):
Additional context
ModSecurity/apache2/persist_dbm.c, line 71 in 705002b

`collection_unpack()` checks if `blob_offset + var->value_len` extends beyond `blob_size`, but then runs `apr_pstrmemdup()` with a size of `var->value_len - 1`. If `var->value_len == 0`, the size value will wrap around to 4294967295. The following patch might help: