nginx + modsec high memory usage - increases depending on number of vhosts #92

Closed
vaLski opened this issue Feb 7, 2018 · 16 comments

vaLski commented Feb 7, 2018

Hello.

I have compiled nginx 1.13.7 + modsecurity 3.0 + https://github.com/SpiderLabs/ModSecurity/pull/1667/files patch

ModSecurity is enabled globally in the http section for all virtual hosts.

If ModSecurity is enabled and the number of vhosts increases, the nginx workers start consuming more memory.

I can see a clear relationship between the number of ModSecurity rules, the number of vhosts, and the memory consumption:

If I have a lot of vhosts and a lot of modsec rules, memory usage is high.

If I decrease either the number of rules or the number of virtual hosts, the memory footprint starts to drop.

It appears to me that instead of storing all modsec rules in one central place in memory and doing lookups/searches there, you are allocating separate ModSecurity structures that include all rules per server/virtual host. Not sure.

See the table below:

version - nginx version
vhosts - number of vhosts
modsec - ModSecurity state (no/off/on)
rules - number of modsec rules loaded
virt - virtual memory
rss - RSS memory consumption
mem/vhost - rough calculation: rss / vhosts
rules_mem - rough calculation: rss (modsec on) - rss (modsec off)
mem/rule - rough calculation: rules_mem / rules
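For anyone reproducing these figures, here is a minimal sketch for sampling total nginx RSS (assumes a Linux host with procps `ps`; the table's numbers may have been collected differently):

```shell
# Sum the RSS (KiB) of all nginx processes and print it in MiB.
# Run once before and once after enabling ModSecurity to get the deltas below.
ps -o rss= -C nginx 2>/dev/null |
  awk '{sum += $1} END {printf "total RSS: %.1f MiB\n", sum/1024}'
```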

From that it appears to me that memory grows faster with the number of virtual hosts than with the number of rules. Not sure why that is, when modsec is included globally in the http section.

version  vhosts  modsec  rules  virt   rss     mem/vhost  rules_mem  mem/rule
1.10.3   14      no      0      141m   19.51m  1.39m      0          0
1.10.3   1779    no      0      823m   702m    0.39m      0          0
1.10.3   1779    no      0      823m   703m    0.39m      0          0
1.13.7   1779    no      0      854m   732m    0.41m      0          0
1.13.7   1779    off     0      1007m  814m    0.45m      0          0
1.13.7   1779    off     0      1269m  1077m   0.60m      0          0
1.13.7   1779    off     0      1269m  1077m   0.60m      0          0
1.13.7   1779    off     0      1277m  1126m   0.63m      0          0
1.13.7   1779    on      0      1277m  1126m   0.63m      0          0
1.13.7   1779    on      1661   2541m  2342m   1.31m      1216m      0.73m
1.13.7   1779    on      3288   3306m  3063m   1.72m      2249m      0.68m
1.13.7   1779    on      3288   3798m  3575m   2.00m      2449m      0.74m
1.13.7   891     on      3288   2021m  1768m   1.98m      642m       0.19m
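As a sanity check on the arithmetic, the derived columns can be recomputed from the raw numbers; a quick sketch using the 1661-rule row (baseline taken from the 1126m no-rules row):

```python
# Values copied from the table's 1661-rule row.
rss_on = 2342    # RSS in MiB with 1661 rules loaded
rss_off = 1126   # RSS in MiB with ModSecurity loaded but zero rules
vhosts = 1779
rules = 1661

mem_per_vhost = rss_on / vhosts   # ~1.31m per vhost (table truncates)
rules_mem = rss_on - rss_off      # 1216m attributable to the rules
mem_per_rule = rules_mem / rules  # ~0.73m per rule

print(f"{mem_per_vhost:.2f}m {rules_mem}m {mem_per_rule:.2f}m")
```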

I hope this helps.

@LeeShan87

Hi @vaLski I had a similar issue.

Have you tried adding all the common ModSecurity configuration in http, and storing only the differences in the server and location sections?

Like:

http {
    modsecurity_rules_file /path/to/default/modsec.conf;
    server {
        modsecurity_rules_file /path/to/server_specific/diff/modsec.conf;
        location / {
              modsecurity_rules_file /path/to/location_specific/diff/modsec.conf;
        }
    }
}

/path/to/server_specific/diff/modsec.conf should only contain directives like SecRuleEngine or SecRuleRemoveById.
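As a concrete (hypothetical) illustration, such a per-server diff file might contain nothing more than an engine toggle and a few rule exclusions; the rule IDs below are placeholders, not a recommendation:

```
# server_specific/diff/modsec.conf -- hypothetical example.
# Keep the engine on, but drop rules that false-positive on this vhost.
SecRuleEngine On
SecRuleRemoveById 941100 942100
```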

This configuration dramatically reduced ModSecurity-nginx memory consumption for me.


vaLski commented Feb 14, 2018

@LeeShan87 I also enabled and included the ModSecurity configuration only inside the http section. Currently I do not have per-server directive includes, and the memory footprint is still huge.

http {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/mod_security.conf;
    ...
}

AnoopAlias commented Feb 14, 2018 via email

vaLski commented Feb 14, 2018

@AnoopAlias this is exactly what I am experiencing, and it is also shown by the tests I performed initially.

Memory increases dramatically not with the number of rules I include, but with the number of vhosts I add.

A very similar vhosts/rules ratio on Apache keeps the footprint in the MB range, while nginx rockets into the GB range.

@AnoopAlias

Yes. What I don't understand is that nginx offers v3 along with the commercial nginx enterprise edition, and I'm not sure how they are dealing with this. Perhaps @defanator can shed some light?

From what I have tested on shared servers (the servers I work with typically have 500+ vhosts), it's almost unusable at this stage.

@victorhora

@vaLski @AnoopAlias Can you let us know if you see the same results with the v3/dev/performance branch and PR #80?

Thanks.


vaLski commented Feb 15, 2018

@victorhora I just tested ModSecurity from the performance branch and nginx built with the ModSecurity-nginx connector from #80, but this does not make any difference in overall memory consumption. The numbers look exactly like the ones from my initial post.


AirisX commented Feb 16, 2018

PR #80 mostly gives an advantage when you reload nginx multiple times.

The problem probably lies in the part of the module that merges the server and location configurations.


AirisX commented Feb 16, 2018

@vaLski Is memory consumption growing dynamically over time?


vaLski commented Feb 19, 2018

@AirisX It does not look like it. As far as I can see on one machine in production, memory consumption looks stable, so it does not appear to be leaking.

@AnoopAlias

@victorhora - Used v3/dev/performance + applied PR #80 and applied SpiderLabs/owasp-modsecurity-crs#995

It's going well so far. I will keep you all posted with more test results.

@vikas027

Hey @AnoopAlias ,

We too are facing similar issues. Just curious to see how you are finding memory usage now.

Cheers!

@AnoopAlias

I provide a custom RPM for use on hosting servers, so I cannot really force/test mod_sec much on my clients' servers due to the various instabilities it currently has. The latest RPM I compiled uses v3/master, as I see it is getting more updates than v3/dev/performance, along with the #80 patch applied to the connector. So when I get a bit of time, I will do my tests and report here.

@vikas027

I too have used v3/dev/performance + applied PR #80 as recommended by @AnoopAlias and ran some tests. All looks good now, no memory leak :)

CC: @victorhora

@victorhora

Appreciate the feedback @vikas027 @AnoopAlias @vaLski.

FYI, the v3/dev/performance branch was merged into v3/master on Feb 20th, and since then a number of fixes and improvements have been added, so it might be a good idea to rebuild from master.

As for #80, since everyone is getting good results with it, including @defanator's performance tests from late Feb, I think it might be a good idea to merge it into the main branch of the nginx connector. What do you think @zimmerle?

@zimmerle

#80 is now merged. Sorry for the delay. Assuming that this issue is now closed. Thank you!
