
Only load module ngx_http_modsecurity_module.so when option enable-mo… #4119

Merged
merged 1 commit into from
Jun 11, 2019
Conversation

tammert
Contributor

@tammert tammert commented May 25, 2019

What this PR does / why we need it:
Ensures the load_module directive for ngx_http_modsecurity_module.so is only present in nginx.conf when enable-modsecurity: true is present in the ConfigMap, instead of always. For everyone not using ModSecurity, this should provide a slightly leaner application.
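Sketched against the template, the change amounts to guarding the directive the same way the geoip2 module already is in nginx.tmpl (the field name `$all.Cfg.EnableModsecurity` is the one used in this PR's diff):

```
{{ if $all.Cfg.EnableModsecurity }}
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;
{{ end }}
```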

Which issue this PR fixes:
Fixes #3842

Special notes for your reviewer:
ModSecurity can be enabled via ConfigMap or Annotation. However, the documentation for the Annotation clearly states:

The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap.

Because of this, I feel like we can simply place the loading of the module behind the ConfigMap option.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 25, 2019
@k8s-ci-robot
Contributor

Hi @tammert. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label May 25, 2019
@@ -16,7 +16,9 @@ pid {{ .PID }};
load_module /etc/nginx/modules/ngx_http_geoip2_module.so;
{{ end }}

{{ if $all.Cfg.EnableModsecurity }}

Member

Oh I missed your "special notes"

Member

What I said above still holds: #4119 (comment)

@ElvinEfendi
Copy link
Member

ModSecurity can be enabled via ConfigMap or Annotation. However, the documentation for the Annotation clearly states:

hrm, looking at the implementation and the e2e tests https://github.com/kubernetes/ingress-nginx/blob/master/test/e2e/annotations/modsecurity.go I think that documentation is not correct.

@tammert
Contributor Author

tammert commented May 26, 2019

Actually, I would argue that the current implementation for enabling it for separate locations might be flawed.

The background for this: we first tried configuring it with the Ingress Annotation, however that resulted in the situation where /etc/nginx/modsecurity/modsecurity.conf and /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf were loaded for every location. In our instance (and I would expect for everyone else), this caused a massive increase in memory use, something like 50-100MB for each location. NGINX eventually ran with around 3GB of memory before crashing the node (and before we implemented a ResourceLimit for NGINX...).

We solved this situation by loading all rule files once, in the HTTP block. Then, for each Ingress, we used nginx.ingress.kubernetes.io/modsecurity-snippet to determine whether SecRuleEngine was On, Off or DetectionOnly (and any other configuration we wanted for that particular Ingress).

In that design, the top-level ConfigMap configuration would be the place to arrange all this, but that would result in a larger refactor than the original scope of the issue/PR. What do you think?

@ElvinEfendi
Member

@tammert are you saying it is not possible to enable modsecurity per location? (I never used modsecurity personally). Can we not fix the existing implementation instead, to make sure that when you set only the annotation, it enables it in the respective locations and nowhere else? Can you add some more assertions to the e2e tests to show that it in fact gets configured for all locations and not just for the locations where it is intended to be enabled via annotation?

@tammert
Contributor Author

tammert commented May 26, 2019

No, it actually works the way it was designed: if you set the Annotation on an Ingress, those locations will have ModSecurity enabled (without having to configure it instance-level, in the http{} block, via the ConfigMap). However, in practice, I have found that for each location in the Ingress that contains the Annotation, the rules are loaded. This causes a lot of unnecessary memory usage, since the rules are actually pretty heavy.

So let's say you have an Ingress with the Annotation that contains a total of 10 locations, spread over any number of hostnames. In this case, by setting the Annotation on the Ingress, you would load all rule files 10 times, which ModSecurity seems to hold in memory, thus increasing memory for NGINX by a lot.

Because of this, I would argue that configuring (at the very least) the core rule sets for each location is not good practice. If you load them once at instance level, in the http{} block, all locations can use the loaded rule sets and override them if necessary. With that design, you have minimal extra memory usage, but still the flexibility of a per-Ingress configuration.

Does that make more sense?
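A rough nginx-level sketch of that design (the directive names are from the ModSecurity-nginx connector; the rule-file path is the one mentioned above, and the location and snippet content are illustrative):

```nginx
http {
    modsecurity on;
    # Rule files loaded once, at the http{} level, instead of per location:
    modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;

    server {
        location / {
            # Per-Ingress override, injected via the
            # nginx.ingress.kubernetes.io/modsecurity-snippet annotation:
            modsecurity_rules 'SecRuleEngine DetectionOnly';
        }
    }
}
```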

@ElvinEfendi
Member

@tammert that makes sense to me. Basically, when the configmap setting is enabled then ModSecurity will be configured at the HTTP level and be enabled for all locations. And if we want a location to not have it, then an annotation can be used to exclude specific Ingress locations.

@aledbf what do you think?

@tammert
Contributor Author

tammert commented May 26, 2019

@ElvinEfendi exactly. And what might be good to realize is that by default, ModSecurity runs in DetectionOnly mode. So if someone enables it via ConfigMap, it would not automatically start blocking requests everywhere yet; for that, the user would need to use the per-Ingress annotation nginx.ingress.kubernetes.io/modsecurity-snippet to override DetectionOnly with On.

I think if we manage to convey this design properly via the documentation it should be much easier for people to start effectively using ModSecurity.
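To make that two-step design concrete, a sketch of the configuration being described (the ConfigMap and Ingress names and the apiVersion are placeholders; the `enable-modsecurity` key and the `modsecurity-snippet` annotation are the ones discussed in this thread):

```yaml
# Step 1: ConfigMap — load the module and enable ModSecurity instance-wide.
# ModSecurity starts in DetectionOnly mode, so nothing is blocked yet.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # placeholder name
data:
  enable-modsecurity: "true"
---
# Step 2: per-Ingress annotation — flip the engine to blocking mode
# only where it should actually block.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                 # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine On
```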

@aledbf
Member

aledbf commented May 26, 2019

The background for this: we first tried configuring it with the Ingress Annotation, however that resulted in the situation where /etc/nginx/modsecurity/modsecurity.conf and /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf were loaded for every location.

Are you sure? This PR was merged this week #4091

@tammert
Contributor Author

tammert commented May 26, 2019

@aledbf I was actually working with the recent codebase, so yes. Good to see someone else has noticed this as well though :). However, that PR only fixes the situation where the ConfigMap contains `enable-modsecurity: "true"`.

A valid configuration that still has this issue: nothing configured for ModSecurity in the ConfigMap, with an Annotation on the Ingress. In this case, every location in the Ingress (which, in my experience, would probably be more than a single location) would still load the rules, resulting in multiple loads with corresponding memory use. By putting the loading of rules entirely behind the ConfigMap configuration it would be simpler for the user to configure, with almost no chance for multiple loads.

Again, I hope this makes sense! I struggled to get it properly configured when I started using this functionality and I hope I can improve it for others.

@aledbf
Member

aledbf commented Jun 3, 2019

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jun 3, 2019
@codecov-io

codecov-io commented Jun 4, 2019

Codecov Report

Merging #4119 into master will increase coverage by 0.1%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff            @@
##           master    #4119     +/-   ##
=========================================
+ Coverage   57.66%   57.77%   +0.1%     
=========================================
  Files          87       87             
  Lines        6463     6479     +16     
=========================================
+ Hits         3727     3743     +16     
  Misses       2306     2306             
  Partials      430      430
Impacted Files Coverage Δ
internal/ingress/controller/template/template.go 84.69% <100%> (+0.45%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2879309...c11583d. Read the comment docs.

@tammert
Contributor Author

tammert commented Jun 4, 2019

/retest
So I worked out some logic which should support the current situation. I still believe that the current setup actually allows an inexperienced user to load the rules more than once (thereby increasing memory load), but that might be something for a different PR :). This should take care of the conditional loading of the module!

@tammert
Contributor Author

tammert commented Jun 5, 2019

@ElvinEfendi @aledbf tests seem to have passed, so take a look at the code when you get a chance :). If I need to rebase or squash the commits, let me know.

@aledbf
Member

aledbf commented Jun 5, 2019

@tammert I prefer to do this kind of thing in code and not in the template

{{ end }}
{{ end }}
{{ end }}
{{ end }}
Member

Please introduce a new function (e.g. shouldLoadModSecurityModule($servers)) in template.go and move this logic there. That way you can have a unit test for it too.

Then below you can just call that function.
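A minimal sketch of what such a helper could look like. The struct and field names here are simplified assumptions for illustration, not the controller's actual types in internal/ingress; the logic is the one discussed in this thread — load the module when the global ConfigMap flag is set or when any location enables ModSecurity via annotation:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the controller's config structs.
type Config struct {
	EnableModsecurity bool
}

type ModSecurityCfg struct {
	Enable bool
}

type Location struct {
	ModSecurity ModSecurityCfg
}

type Server struct {
	Locations []Location
}

// shouldLoadModSecurityModule reports whether the load_module directive for
// ngx_http_modsecurity_module.so needs to be emitted: either the global
// ConfigMap flag is set, or at least one location enables ModSecurity.
func shouldLoadModSecurityModule(cfg Config, servers []Server) bool {
	if cfg.EnableModsecurity {
		return true
	}
	for _, srv := range servers {
		for _, loc := range srv.Locations {
			if loc.ModSecurity.Enable {
				return true
			}
		}
	}
	return false
}

func main() {
	// Enabled globally via ConfigMap.
	fmt.Println(shouldLoadModSecurityModule(Config{EnableModsecurity: true}, nil)) // true

	// Enabled only on one location via annotation.
	servers := []Server{{Locations: []Location{{ModSecurity: ModSecurityCfg{Enable: true}}}}}
	fmt.Println(shouldLoadModSecurityModule(Config{}, servers)) // true

	// Not enabled anywhere: module is not loaded.
	fmt.Println(shouldLoadModSecurityModule(Config{}, []Server{{Locations: []Location{{}}}})) // false
}
```

Because the decision lives in one pure function, a table-driven unit test covers all three cases without rendering the template.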

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jun 10, 2019
@tammert
Contributor Author

tammert commented Jun 10, 2019

@ElvinEfendi @aledbf thanks so much for the feedback! I've learned a lot from this PR.

The code should be much neater now: I've introduced the shouldLoadModSecurityModule func as suggested, which contains all the logic for loading/not-loading of the module. Unit test is included as well. Let me know if you have any more comments (or when it's time to squash the commits).

@ElvinEfendi
Member

/approve

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 11, 2019
@ElvinEfendi
Member

@tammert ready to squash the commits now.

@tammert
Contributor Author

tammert commented Jun 11, 2019

@ElvinEfendi commit squashing is done.

@aledbf
Member

aledbf commented Jun 11, 2019

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 11, 2019
@aledbf
Member

aledbf commented Jun 11, 2019

@tammert thanks!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aledbf, ElvinEfendi, tammert

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 8cee8d5 into kubernetes:master Jun 11, 2019
@tammert tammert deleted the conditional-modsecurity-module-load branch June 11, 2019 16:46
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. lgtm "Looks good to me", indicates that a PR is ready to be merged. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

ModSecurity module is always loaded, even when not active
5 participants