
v3.0.7 segfault at liblmdb.so.0.0.0 #2755

Closed
dvershinin opened this issue Jun 2, 2022 · 8 comments

@dvershinin (Contributor)

It appears that v3.0.7 introduced some changes related to LMDB that cause a segfault in the library.

Environment:

  • CentOS 7 or CentOS Stream 8
  • nginx v1.22.0 compiled with the latest release of the ModSecurity-nginx connector and libmodsecurity v3.0.7 (library built with ./configure --with-lmdb --with-pcre2)
  • latest release of OWASP CRS

Result: accessing any URL triggers:

[2766481.294938] nginx[18205]: segfault at 7c ip 00007fad3894fd91 sp 00007ffcabc55b50 error 4 in liblmdb.so.0.0.0[7fad3894c000+14000]
[2766481.300369] Code: ff ff 48 8d 78 40 e8 3e ed ff ff e9 02 ff ff ff 66 0f 1f 84 00 00 00 00 00 41 57 41 56 41 55 41 54 55 53 48 89 fb 48 83 ec 18 <44> 8b 6f 7c 48 8b 6f 20 41 81 e5 00 00 02 00 4c 8b 7d 40 0f 84 86
@martinhsv (Contributor)

Hi @dvershinin ,

Are you able to provide a stack trace?

@dvershinin (Contributor, Author)

@martinhsv

gdb nginx-debug /tmp/cores/core.nginx-debug.626957
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-16.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from nginx-debug...Reading symbols from /usr/lib/debug/usr/sbin/nginx-debug-1.22.0-4.el8.x86_64.debug...done.
done.

warning: Can't open file (null) during file-backed mapping note processing

warning: Can't open file (null) during file-backed mapping note processing

warning: Can't open file (null) during file-backed mapping note processing
[New LWP 626957]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `nginx: worker process                         '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f77fb751221 in mdb_txn_renew0 (txn=txn@entry=0x0) at mdb.c:2676
2676		MDB_env *env = txn->mt_env;
Missing separate debuginfos, use: yum debuginfo-install libnghttp2-1.33.0-3.el8_2.1.x86_64 ssdeep-libs-2.14.1-7.el8.x86_64
(gdb) info threads
  Id   Target Id                          Frame 
* 1    Thread 0x7f77fe47db80 (LWP 626957) 0x00007f77fb751221 in mdb_txn_renew0 (txn=txn@entry=0x0) at mdb.c:2676
(gdb) thread 1
[Switching to thread 1 (Thread 0x7f77fe47db80 (LWP 626957))]
#0  0x00007f77fb751221 in mdb_txn_renew0 (txn=txn@entry=0x0) at mdb.c:2676
2676		MDB_env *env = txn->mt_env;
(gdb) bt
#0  0x00007f77fb751221 in mdb_txn_renew0 (txn=txn@entry=0x0) at mdb.c:2676
#1  0x00007f77fb75289c in mdb_txn_begin (env=0x55db81b496a0, parent=parent@entry=0x0, flags=524288, flags@entry=0, ret=ret@entry=0x7fffe5a084a0) at mdb.c:2907
#2  0x00007f77fc75dbe0 in modsecurity::collection::backend::MDBEnvProvider::MDBEnvProvider (this=0x7f77fca3b6d0 <modsecurity::collection::backend::MDBEnvProvider::GetInstance()::instance>) at collection/backend/lmdb.cc:509
#3  0x00007f77fc75dcdb in modsecurity::collection::backend::MDBEnvProvider::GetInstance () at collection/backend/lmdb.cc:47
#4  modsecurity::collection::backend::LMDB::txn_begin (this=0x55db80ab3a60, flags=131072, ret=0x7fffe5a08538) at collection/backend/lmdb.cc:45
#5  0x00007f77fc75f31f in modsecurity::collection::backend::LMDB::resolveMultiMatches (this=0x55db80ab3a60, var="127.0.0.1_d7ed64233a3c9ab4b245cc581bfd1eb1f02ff333::::PREVIOUS_RBL_CHECK", l=0x7fffe5a08880, ke=...) at collection/backend/lmdb.cc:411
#6  0x00007f77fc75ca74 in modsecurity::collection::Collection::resolveMultiMatches (this=this@entry=0x55db80ab3a60, var="PREVIOUS_RBL_CHECK", compartment="127.0.0.1_d7ed64233a3c9ab4b245cc581bfd1eb1f02ff333", compartment2="", l=l@entry=0x7fffe5a08880, ke=...) at ../headers/modsecurity/collection/collection.h:177
#7  0x00007f77fc6b09c9 in modsecurity::variables::Ip_DictElement::evaluate (this=0x55db80b68110, t=0x55db81b358e0, rule=<optimized out>, l=0x7fffe5a08880) at /usr/include/c++/8/bits/basic_string.h:940
#8  0x00007f77fc7171b1 in modsecurity::RuleWithOperator::evaluate (this=0x55db80b69070, trans=0x55db81b358e0, ruleMessage=std::shared_ptr<class modsecurity::RuleMessage> (use count 2, weak count 0) = {...}) at rule_with_operator.cc:278
#9  0x00007f77fc713af9 in modsecurity::RuleWithActions::evaluate (this=0x55db80b69070, transaction=0x55db81b358e0) at /usr/include/c++/8/ext/atomicity.h:98
#10 0x00007f77fc707c2a in modsecurity::RulesSet::evaluate (this=0x55db81605a80, phase=<optimized out>, t=0x55db81b358e0) at rules_set.cc:210
#11 0x00007f77fc6f2344 in modsecurity::Transaction::processRequestBody() () at transaction.cc:979
#12 0x00007f77fca3f2f9 in ngx_http_modsecurity_pre_access_handler (r=0x55db81a44590) at ModSecurity-nginx-1.0.3/src/ngx_http_modsecurity_pre_access.c:212
#13 0x000055db7f7360d7 in ngx_http_core_generic_phase (r=0x55db81a44590, ph=0x55db81a4ac28) at src/http/ngx_http_core_module.c:898
#14 0x000055db7f73185d in ngx_http_core_run_phases (r=r@entry=0x55db81a44590) at src/http/ngx_http_core_module.c:876
#15 0x000055db7f731939 in ngx_http_handler (r=r@entry=0x55db81a44590) at src/http/ngx_http_core_module.c:859
#16 0x000055db7f73dc42 in ngx_http_process_request (r=r@entry=0x55db81a44590) at src/http/ngx_http_request.c:2180
#17 0x000055db7f777c01 in ngx_http_v2_run_request (r=0x55db81a44590) at src/http/v2/ngx_http_v2.c:3995
#18 0x000055db7f777dae in ngx_http_v2_state_header_complete (end=0x55db81af0c60 "", pos=0x55db81af0c60 "", h2c=0x55db81b30850) at src/http/v2/ngx_http_v2.c:1919
#19 ngx_http_v2_state_header_complete (h2c=0x55db81b30850, pos=0x55db81af0c60 "", end=0x55db81af0c60 "") at src/http/v2/ngx_http_v2.c:1896
#20 0x000055db7f778c16 in ngx_http_v2_state_field_len (h2c=h2c@entry=0x55db81b30850, pos=<optimized out>, end=end@entry=0x55db81af0c60 "") at src/http/v2/ngx_http_v2.c:1579
#21 0x000055db7f778e82 in ngx_http_v2_state_header_block (h2c=0x55db81b30850, pos=<optimized out>, end=0x55db81af0c60 "") at src/http/v2/ngx_http_v2.c:1495
#22 0x000055db7f7764d5 in ngx_http_v2_read_handler (rev=0x55db81ac0a00) at src/http/v2/ngx_http_v2.c:437
#23 0x000055db7f720ab8 in ngx_epoll_process_events (cycle=0x55db80a8b190, timer=<optimized out>, flags=<optimized out>) at src/event/modules/ngx_epoll_module.c:901
#24 0x000055db7f714b6a in ngx_process_events_and_timers (cycle=cycle@entry=0x55db80a8b190) at src/event/ngx_event.c:248
#25 0x000055db7f71e26d in ngx_worker_process_cycle (cycle=cycle@entry=0x55db80a8b190, data=data@entry=0x0) at src/os/unix/ngx_process_cycle.c:721
#26 0x000055db7f71cad7 in ngx_spawn_process (cycle=cycle@entry=0x55db80a8b190, proc=0x55db7f71e210 <ngx_worker_process_cycle>, data=0x0, name=0x55db7f7e2432 "worker process", respawn=respawn@entry=0) at src/os/unix/ngx_process.c:199
#27 0x000055db7f71fb5a in ngx_reap_children (cycle=0x55db80a8b190) at src/os/unix/ngx_process_cycle.c:598
#28 ngx_master_process_cycle (cycle=0x55db80a8b190) at src/os/unix/ngx_process_cycle.c:174
#29 0x000055db7f6f10c8 in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:383

@martinhsv (Contributor) commented Jun 7, 2022

I was able to reproduce a crash in CentOS7. The crux of the issue appears to be that the ModSecurity code that opens the physical files uses '.' -- current working directory (cwd) -- for the start of the file location. In this environment, cwd was '/' which is not directly writeable.

This is not a new issue in v3.0.7. I was able to cause the exact same problem in v3.0.6 on CentOS7. (However, since in v3.0.6 a different process creates and opens the files, the set of use cases where this occurs may not be identical.)

What worked as a workaround for me was to 1) create empty physical files where I want them, then 2) create, within '/', symbolic links to those files. E.g.

cd /tmp
touch modsec-shared-collections
touch modsec-shared-collections-lock
chown nginx:nginx modsec-shared-collections
chown nginx:nginx modsec-shared-collections-lock

cd /
ln -s /tmp/modsec-shared-collections modsec-shared-collections
ln -s /tmp/modsec-shared-collections-lock modsec-shared-collections-lock
chown nginx:nginx modsec-shared-collections
chown nginx:nginx modsec-shared-collections-lock

Longer term we should probably consider one or both of the following:
a) Having the code use '.' for the file location in the call to mdb_env_open is limiting for the user, and makes things somewhat fragile to idiosyncrasies of different environments and what the process thinks the cwd is. It may be worth creating a new ModSecurity configuration directive so that the path can optionally be specified by the user.
b) It wouldn't hurt to have better error handling so that what is happening here is more obvious.
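The fragility described in (a) comes down to relative-path resolution: the LMDB files land in whatever the worker's current working directory happens to be. A minimal shell sketch (the file name mirrors the one used in this issue; the directory is a scratch one so the commands are safe to run as-is):

```shell
# A relative path like '.' resolves against the process's current
# working directory, so where the files land depends entirely on
# what the worker's cwd happens to be at open time.
workdir=$(mktemp -d)
( cd "$workdir" && touch modsec-shared-collections )
ls "$workdir"     # → modsec-shared-collections
rm -r "$workdir"
```

When the worker's cwd is '/', the same `touch` (or LMDB file creation) fails for an unprivileged user, which is the failure mode seen here.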

@dvershinin (Contributor, Author)

a different process creates and opens the files

Thank you. Actually, this was precisely the breaking change for me:

  • Prior to 3.0.7, the collections file was apparently created by the master NGINX process, and thus by the root user, so there was never a permissions issue
  • As of 3.0.7 it is created by an NGINX worker process, so the nginx user needs write access to the target directory...

It may be worth creating a new ModSecurity configuration directive

I strongly agree with adding that.

@martinhsv (Contributor) commented Jun 8, 2022

Yes, that's right. Since an nginx worker process now does this, what it thinks its working directory is, and what permissions it has, both matter.

Besides the workaround I suggested yesterday, another option is to use the nginx configuration directive 'working_directory', which lets you specify the working directory of a worker process. E.g.

working_directory /tmp;

For most users this is probably both a simpler and more natural way to handle this. One can use whatever directory one wishes, as long as the nginx user has the appropriate permissions.
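In context, a minimal sketch of the relevant part of nginx.conf (the directory is illustrative; it just needs to exist and be writable by the worker user):

```nginx
user  nginx;
# Workers resolve relative paths (including the ModSecurity LMDB
# collection files) against this directory, so point it somewhere
# the 'nginx' user can write to:
working_directory  /var/lib/nginx;
```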

dvershinin added a commit to GetPageSpeed/ModSecurity that referenced this issue Jun 9, 2022
Make transactions no-op if the file handle is invalid
@dvershinin (Contributor, Author) commented Jun 9, 2022

@martinhsv hope you can have a look at the pull request #2761 I made. My main goal is to prevent the segfault if users misconfigured their environment (which as of v3.0.7 becomes very easy even if libmodsecurity and nginx were installed with the correct permissions/configuration at first).

If they change the user directive in the NGINX configuration, this will cause a segfault because the collections file cannot be created/accessed. The pull request makes LMDB non-operational in such a case, but at least NGINX won't crash miserably.

@martinhsv (Contributor) commented Jun 10, 2022

I'm not sure I understand what you mean regarding the nginx 'user' directive. If one were to set user root, then the worker processes would run with root as the process owner, and everything should work. I wouldn't recommend that, however.

I think it's safer to leave the worker processes running as the user 'nginx', and to adjust the file location (and permissions if necessary):

  1. Set the working_directory for the worker processes to a directory that is writeable by nginx
  2. If this is a ModSecurity upgrade and you want to retain your old lmdb data, move the two modsec-shared-collection* files to the directory specified in (1), and change their ownership to nginx:nginx
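A sketch of the two steps above. On a real system you would set target to the directory named in working_directory and owner to nginx; the defaults here point at scratch directories so the commands are safe to run as-is:

```shell
old_cwd=${old_cwd:-$(mktemp -d)}   # where the old lmdb files live ('/' in this issue)
target=${target:-$(mktemp -d)}     # step 1: the new working_directory
owner=${owner:-$(id -un)}          # real deployments: nginx

# Simulate the pre-existing files for this sandbox run:
touch "$old_cwd/modsec-shared-collections" "$old_cwd/modsec-shared-collections-lock"

# Step 2: move both files and hand them to the worker user:
mv "$old_cwd"/modsec-shared-collections* "$target"/
chown "$owner" "$target"/modsec-shared-collections*
ls "$target"
```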

(I'll comment on your PR separately.)

@martinhsv (Contributor)

Closed via #2761
