
Add staging-with-rebase-focal to CI and fix testinfra tests #5638

Merged

merged 12 commits into develop from fix_dhclient_path_for_testinfra on Dec 7, 2020

Conversation


@kushaldas kushaldas commented Nov 17, 2020

Status

Ready for review

Description of Changes

Fixes #5636 #5635 #5637 #5655 #5510

  • Adds a CI target for staging-with-rebase on Focal.
  • Disables systemd-resolved on Focal and uses the /etc/resolv.conf file.
  • Updates the test that finds ports open for listening on the mon server.
  • Uses a single dhclient path for both Xenial and Focal testing.
  • Updates the standard gpg output tests on servers for both Xenial and Focal.
  • OSSEC used a sysv script on Xenial, and an ossec.service systemd-style service file on Focal.
  • Adds a new test to verify how the ossec service is started on Xenial and Focal (a sketch follows below).

https://kushaldas.in/posts/story-of-debugging-exit-0.html describes why we need two different service files for ossec-server and ossec-agent.
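
As a rough illustration of that new check, here is a minimal testinfra-style sketch (hypothetical, not the PR's actual test code; the unit-file paths are assumptions):

import pytest


def test_ossec_service_style(host):
    # Hypothetical check: Xenial should start OSSEC via the sysv init
    # script, while Focal should use a native systemd unit. The paths
    # below are assumptions for illustration.
    codename = host.system_info.codename
    if codename == "xenial":
        assert host.file("/etc/init.d/ossec").exists
    elif codename == "focal":
        assert host.file("/etc/systemd/system/ossec.service").exists
    else:
        pytest.skip("only Xenial and Focal are covered")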

Testing

  • CI is green.
  • molecule test -s libvirt-staging-focal should not have any test failures in test_apparmor.py; there will be 4 other test failures due to different bugs.
  • OSSEC logs are flowing between app and mon, and this persists across reboots.
  • Changes to the DNS configuration make sense here (we are reusing the existing resolv.conf templates)

Checklist

If you made changes to the system configuration:

If you made non-trivial code changes:

  • I have written a test plan and validated it for this PR

Choose one of the following:

  • I have opened a PR in the docs repo for these changes, or will do so later
  • I would appreciate help with the documentation
  • These changes do not require documentation

emkll previously requested changes Nov 17, 2020

@emkll emkll left a comment


Thanks for opening this @kushaldas, as well as #5635, #5636 and #5637 to track specific failures.

As part of our commitment this sprint to a timebox around #5509, I recommend we proceed as follows, since we currently have no way of reliably tracking the failure count on Focal:

  1. Amend this PR to add a CircleCI task for staging-with-rebase-focal (it's OK if we merge with staging-with-rebase-focal at the end of our timebox); it seems best to track failures/improvements over time by having CI run it instead of a developer running these tests locally.
    (note that the existing GCP CI base image should already have Vagrant 20.04 boxes included, so the change should be relatively straightforward)
  2. Continue addressing the test failures in this working branch (including the issues you've opened, and whichever other testinfra failures are present in Focal).

@kushaldas kushaldas force-pushed the fix_dhclient_path_for_testinfra branch 4 times, most recently from 4e8d788 to 5a110a0 on November 18, 2020 11:50
@emkll emkll changed the title from "Fixes #5636 uses right path of dhclient in Focal" to "Add staging-with-rebase-focal to CI and fix testinfra tests" on Nov 23, 2020
@eloquence eloquence mentioned this pull request Nov 23, 2020

kushaldas commented Nov 27, 2020

Right now, if you log in to mon-staging and then run systemctl status ossec, you will find it inactive (dead). It should be starting up. If you manually start the service, then everything works.

@emkll emkll left a comment


Thanks for the changes @kushaldas, I left some inline comments. I've also appended a commit to clean up the test logic (the lsof version is different, so the UDP check would always fail on Focal).

There is, however, still an issue with the ossec service on mon: the service does not start because one of its units (ossec@maild.service) is failing. By running systemctl list-units --all, you can observe the units that are not started, and then check their status. The only failed service is ossec@maild.service, and by checking its status, the root cause of the failure is clear:

ossec-maild [dns]: ERROR: connect() failed.
ossec-maild: ERROR: DNS failure for smtpserver
ossec-maild: ERROR: No socket.
ossec-maild: ERROR: Error Sending email to 127.0.0.1 (smtp server)

This might be related to #5655; perhaps we should pull the fix for that issue into this PR, what do you think?

src: ossec@.service
dest: "/etc/systemd/system/ossec@.service"

- name: Enable the service

Here you should use the Ansible built-in systemd module:

- name: Enable ossec service to run
  systemd:
    name: "{{ item }}"
    enabled: yes
    masked: no
  with_items: "{{ some var in defaults/main.yml }}"

service:
  name: ossec
  state: restarted
command: systemctl restart ossec

Maybe use the systemd Ansible built-in here as well.

@kushaldas

@emkll I think the PR is in a better state now, will know for sure after all CI jobs finish :)

@kushaldas

I still have the SMTP issues:

Nov 30 12:26:00 mon-staging env[761]: 2020/11/30 12:26:00 ossec-maild(1223): ERROR: Error Sending email to 127.0.0.1 (smtp server)
Nov 30 12:26:15 mon-staging env[927]: 2020/11/30 12:26:15 ossec-maild: DEBUG: Running OS_Sendmail()
Nov 30 12:26:15 mon-staging env[762]: 2020/11/30 12:26:15 ossec-maild [dns]: ERROR: connect() failed.
Nov 30 12:26:15 mon-staging env[927]: 2020/11/30 12:26:15 ossec-maild: ERROR: DNS failure for smtpserver
Nov 30 12:26:15 mon-staging env[927]: 2020/11/30 12:26:15 ossec-maild: ERROR: No socket.
Nov 30 12:26:20 mon-staging env[761]: 2020/11/30 12:26:20 ossec-maild(1261): ERROR: Waiting for child process. (status: 256).
Nov 30 12:26:20 mon-staging env[761]: 2020/11/30 12:26:20 ossec-maild(1223): ERROR: Error Sending email to 127.0.0.1 (smtp server)
Nov 30 12:26:20 mon-staging env[761]: 2020/11/30 12:26:20 ossec-maild(1262): ERROR: Too many errors waiting for child process(es).
Nov 30 12:26:20 mon-staging env[761]: 2020/11/30 12:26:20 ossec-maild(1223): ERROR: Error Sending email to 127.0.0.1 (smtp server)

@emkll any tips on which service I should look at?

@kushaldas

I am rebuilding Xenial staging on develop to check what is happening.

@kushaldas

Wondering if ossec/ossec-hids#1891 is involved here?

The `sbin/dhclient` substring is the correct path match on both Xenial and Focal.

On Xenial the file is at `/sbin/dhclient`.
On Focal the file is at `/usr/sbin/dhclient`.
This will run the Focal staging job in CI.
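
A minimal testinfra-style sketch of that shared-substring idea (illustrative only; the PR's actual test may differ):

def test_dhclient_path(host):
    # The concrete dhclient path differs per distro, but both end in
    # sbin/dhclient, so a single substring can cover Xenial and Focal.
    if host.system_info.codename == "xenial":
        path = "/sbin/dhclient"
    else:  # focal
        path = "/usr/sbin/dhclient"
    assert path.endswith("sbin/dhclient")
    assert host.file(path).exists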
@emkll emkll force-pushed the fix_dhclient_path_for_testinfra branch from 94d81f7 to 253d45b on November 30, 2020 21:43

emkll commented Nov 30, 2020

Rebased on latest develop to include changes from #5658


emkll commented Nov 30, 2020

Good news: the box update provided in #5658 has resolved the issue tracked in #5642; one last test is failing, with the service not running/listening on the port.

Bad news: some ossec units are not staying up, including remoted, which receives the logs. While I haven't uncovered the root cause, here are some findings:

  • If you molecule converge and run sudo service ossec start on the mon server, the test does pass, which suggests there may be an issue with the systemd units.

  • I've observed a log error when running sudo systemctl start ossec@remoted and then systemctl status ossec@remoted: ossec-remoted would only bind to an IPv4 address. Applying the following patch (d59cd83) now shows the IPv4 binding in the systemctl status call, but doesn't make the tests green. It seems like ossec-remoted receives SIGTERM from something and is immediately killed. As a result, the test still fails.

Could there be, perhaps, some contention with the existing init scripts that are set by default in the server postinst here: https://github.com/freedomofpress/securedrop/blob/fix_dhclient_path_for_testinfra/install_files/ossec-server/DEBIAN/postinst#L196-L198
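
For reference, the failing check is essentially of this shape (a simplified testinfra sketch, not the exact test from the suite; OSSEC's remoted listens on UDP port 1514 by default):

def test_ossec_remoted_listening(host):
    # The log transport between app and mon: ossec-remoted should be
    # bound to UDP 1514 on the mon server.
    assert host.socket("udp://0.0.0.0:1514").is_listening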

@kushaldas

Here is the journalctl log from ossec@remoted:

-- Logs begin at Tue 2020-12-01 09:56:35 UTC, end at Tue 2020-12-01 10:18:15 UTC. --
Dec 01 10:08:30 mon-staging systemd[1]: Starting The OSSEC HIDS remoted server...
Dec 01 10:08:31 mon-staging systemd[1]: Started The OSSEC HIDS remoted server.
Dec 01 10:08:31 mon-staging env[762]: 2020/12/01 10:08:31 ossec-remoted: INFO: Started (pid: 762).
Dec 01 10:08:31 mon-staging env[763]: 2020/12/01 10:08:31 IPv6: :: on port 1514
Dec 01 10:08:31 mon-staging env[763]: 2020/12/01 10:08:31 Socket bound for IPv6: :: on port 1514
Dec 01 10:08:31 mon-staging env[763]: 2020/12/01 10:08:31 ossec-remoted: INFO: Started (pid: 763).
Dec 01 10:08:31 mon-staging env[763]: 2020/12/01 10:08:31 ossec-remoted(1225): INFO: SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
Dec 01 10:08:31 mon-staging systemd[1]: ossec@remoted.service: Succeeded.
Dec 01 10:18:15 mon-staging systemd[1]: Starting The OSSEC HIDS remoted server...
Dec 01 10:18:15 mon-staging systemd[1]: Started The OSSEC HIDS remoted server.
Dec 01 10:18:15 mon-staging env[1136]: 2020/12/01 10:18:15 ossec-remoted: INFO: Started (pid: 1136).
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 IPv6: :: on port 1514
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 Socket bound for IPv6: :: on port 1514
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 ossec-remoted: INFO: Started (pid: 1137).
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 ossec-remoted(1225): INFO: SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
Dec 01 10:18:15 mon-staging systemd[1]: ossec@remoted.service: Succeeded.

@kushaldas

More strange log differences between service and systemd:

root@mon-staging:/home/vagrant# systemctl status ossec@remoted
● ossec@remoted.service - The OSSEC HIDS remoted server
     Loaded: loaded (/etc/systemd/system/ossec@.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Tue 2020-12-01 10:18:15 UTC; 5min ago
    Process: 1135 ExecStartPre=/usr/bin/env /var/ossec/bin/ossec-remoted -t (code=exited, status=0/SUCCESS)
    Process: 1136 ExecStart=/usr/bin/env /var/ossec/bin/ossec-remoted -f (code=exited, status=0/SUCCESS)
   Main PID: 1136 (code=exited, status=0/SUCCESS)

Dec 01 10:18:15 mon-staging systemd[1]: Starting The OSSEC HIDS remoted server...
Dec 01 10:18:15 mon-staging systemd[1]: Started The OSSEC HIDS remoted server.
Dec 01 10:18:15 mon-staging env[1136]: 2020/12/01 10:18:15 ossec-remoted: INFO: Started (pid: 1136).
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 IPv6: :: on port 1514
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 Socket bound for IPv6: :: on port 1514
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 ossec-remoted: INFO: Started (pid: 1137).
Dec 01 10:18:15 mon-staging env[1137]: 2020/12/01 10:18:15 ossec-remoted(1225): INFO: SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
Dec 01 10:18:15 mon-staging systemd[1]: ossec@remoted.service: Succeeded.
root@mon-staging:/home/vagrant# ps aux | grep ossec
ossecm      1896  0.0  0.2   4108  2612 ?        S    10:22   0:00 /var/ossec/bin/ossec-maild
ossecm      1897  0.0  0.2   4108  2672 ?        S    10:22   0:00 /var/ossec/bin/ossec-maild
ossec       1905  0.0  0.4   5648  4496 ?        S    10:22   0:00 /var/ossec/bin/ossec-analysisd
ossecr      1914  0.0  0.2  20268  2620 ?        Sl   10:22   0:00 /var/ossec/bin/ossec-remoted
root        1915  0.0  0.2   3572  1972 ?        S    10:22   0:00 /var/ossec/bin/ossec-logcollector
root        1920  0.0  0.0   3744   464 ?        S    10:22   0:00 /var/ossec/bin/ossec-syscheckd
ossec       1924  0.0  0.1   3808  1480 ?        S    10:22   0:00 /var/ossec/bin/ossec-monitord
ossecm      1976  0.0  0.0      0     0 ?        Z    10:23   0:00 [ossec-maild] <defunct>
root        1978  0.0  0.0   8908   668 pts/1    S+   10:23   0:00 grep --color=auto ossec

@emkll emkll left a comment


Thanks @kushaldas, this is working for me, and I can confirm logs are flowing between app and mon on Focal, and CI is green 🎉

Opened #5660 and #5661 to track unrelated Focal follow-up issues.

I left an inline comment that is worth addressing. It's unclear to me from your commit message whether part of the issue observed was some sort of init/systemd contention. Regardless, the change provided won't persist across upgrades if we don't make further changes.

We should ensure that the changes to DNS (moving away from the default systemd-resolved and back to resolv.conf) don't break unattended upgrades (as identified in #5655); that would be especially problematic. @conorsch what do you think?


- name: Remove the old style /etc/init.d/ossec file
  file:
    path: "/etc/init.d/ossec"

These files are provided by the ossec-{agent,server} packages:

.
├── etc
│   ├── init.d
│   │   └── ossec
│   └── ossec-init.conf

This means that on subsequent ossec package upgrades, this init file will be re-added unless we perform specific tasks at build time to remove it. If these steps are strictly required for correct functioning, we should address this in the build logic so as not to ship the init.d file on Focal (instead of doing it in Ansible).

@kushaldas kushaldas left a comment

> This means that on subsequent ossec package upgrades, this init file will be re-added unless we perform specific tasks at build time to remove it. If these steps are strictly required for correct functioning, we should address this in the build logic so as not to ship the init.d file on Focal (instead of doing it in Ansible).

I think we should remove it from the server package.


Definitely agree, let's remove the Ansible task and clean up that file at the packaging level. Be mindful of side-effects for Xenial.


codecov-io commented Dec 1, 2020

Codecov Report

Merging #5638 (35ab7a6) into develop (1d50b6e) will decrease coverage by 0.08%.
The diff coverage is n/a.


@@             Coverage Diff             @@
##           develop    #5638      +/-   ##
===========================================
- Coverage    85.40%   85.32%   -0.09%     
===========================================
  Files           50       50              
  Lines         3679     3679              
  Branches       460      460              
===========================================
- Hits          3142     3139       -3     
- Misses         438      440       +2     
- Partials        99      100       +1     
Impacted Files                    Coverage Δ
securedrop/source_app/main.py     90.20% <0.00%> (-1.55%) ⬇️


@conorsch conorsch left a comment


Took a pass through with local VMs, looking good! Agreed with the packaging logic cleanup requested by @emkll. Overall the DNS changes appear reliable, although let's keep the customizations to a minimum. For instance, it appears that on Focal we don't need /etc/resolvconf/resolv.conf.d/base at all, so let's not write to it.


- name: Remove the old style /etc/init.d/ossec file
  file:
    path: "/etc/init.d/ossec"

Definitely agree, let's remove the Ansible task and clean up that file at the packaging level. Be mindful of side-effects for Xenial.

when: ansible_distribution_release == 'focal'
tags:
  - dns
  - hardening

No objections to disabling systemd-resolved; sticking with resolvconf, which we've been using for a while, will be fairly straightforward. Since we just copied the same dns_base source a few lines above, let's use that task to write the file. Sounds like on Xenial we want the /etc/resolvconf/resolv.conf.d/ path, whereas on Focal we should write it directly to /etc/resolv.conf.


Added a commit to consolidate the logic here a bit: under Focal, the old /etc/resolvconf/resolv.conf.d/ path is no longer written to, and the tests now inspect the correct file based on distro.

We remove the /etc/init.d/ossec file and use the systemd
service file in the ossec-server package.
@emkll emkll left a comment

There are a couple of areas that are using service instead of systemctl, all of which appear to still be working:

If we are to move to systemd for Focal (which I think is a good idea, as you've suggested), we still need to preserve init.d/service support under Xenial (unless we carefully handle the service enabling/starting under Xenial as well). This means we will need to conditionally handle the service logic here (init on Xenial and systemd on Focal).

It could perhaps be helpful to take a step back and map out these changes to better understand their implications, but at a strict minimum, prior to merging this PR, we should ensure no changes to Xenial; in other words, the systemctl logic you have introduced for Focal support should not be applied to Xenial hosts, so as not to break existing installs during upgrades.

We verify that Xenial uses the sysv script, and Focal uses
the ossec.service file to start the service on the mon server.
@kushaldas kushaldas force-pushed the fix_dhclient_path_for_testinfra branch from 8bbf202 to a2aa941 on December 3, 2020 10:51
Conor Schaefer added 2 commits December 3, 2020 18:00
Under Focal, we were writing the nameserver info to two (2) files, but
only testing one of them. Using a vars-based approach now, and the test
logic now looks in the correct spot for Focal.
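
A simplified sketch of that distro-aware test logic (illustrative; the paths follow this discussion, but the actual variable names in the test suite may differ):

def test_dns_config(host):
    # Focal writes nameserver info directly to a flat /etc/resolv.conf
    # (systemd-resolved disabled); Xenial keeps using the resolvconf
    # template. The Xenial path below is an assumption from this thread.
    if host.system_info.codename == "focal":
        resolv_path = "/etc/resolv.conf"
    else:
        resolv_path = "/etc/resolvconf/resolv.conf.d/base"
    f = host.file(resolv_path)
    assert f.exists
    assert "nameserver" in f.content_string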
Same as we've done for ossec-server, let's make sure that ossec-agent is
also managed via systemd when running under Focal.
@conorsch conorsch left a comment

Looking solid! Banged on the resolvconf settings, and the approach you took of using a flat file at /etc/resolv.conf is definitely the best choice among many (https://manpages.ubuntu.com/manpages/focal/man8/systemd-resolved.8.html). Overall the diff on this PR is rather small, given how many issues it closes 🙂

I've got just one remaining request prior to merge: given the sysv -> systemd transition for ossec-server, let's also make the same update for ossec-agent, so we can reuse the same bootstrapping logic and even reuse the same tests to verify state on both machines. I took the liberty of pushing a new commit which copies over the ossec-server test to ossec-agent, which should cause CI to fail again ☹️. Once that change is made in the packaging logic and CI is green again, no further requests from me; we're good to go!



kushaldas commented Dec 4, 2020

I made the change to have an ossec.service file for the ossec-agent package too. But when the securedrop-ossec-agent package gets installed, the post-install script tries to start the ossec service. That fails because /var/ossec/etc/client.keys is missing. The same failure occurs even if I try to start the service manually.

root@app-staging:/home/vagrant# /var/ossec/bin/ossec-control start
Starting OSSEC HIDS v3.6.0...
Started ossec-execd...
2020/12/04 13:12:22 ossec-agentd: INFO: Using notify time: 600 and max time to reconnect: 1800
2020/12/04 13:12:22 ossec-agentd(1402): ERROR: Authentication key file '/var/ossec/etc/client.keys' not found.
2020/12/04 13:12:22 ossec-agentd(1751): ERROR: File client.keys not found or empty.
2020/12/04 13:12:22 ossec-agentd(4109): ERROR: Unable to start without auth keys. Exiting.
ossec-agentd did not start
root@app-staging:/home/vagrant# /var/ossec/bin/ossec-agentd status
2020/12/04 13:12:30 ossec-agentd: INFO: Using notify time: 600 and max time to reconnect: 1800
2020/12/04 13:12:30 ossec-agentd(1402): ERROR: Authentication key file '/var/ossec/etc/client.keys' not found.
2020/12/04 13:12:30 ossec-agentd(1751): ERROR: File client.keys not found or empty.
2020/12/04 13:12:30 ossec-agentd(4109): ERROR: Unable to start without auth keys. Exiting.
root@app-staging:/home/vagrant# /var/ossec/bin/ossec-agentd
2020/12/04 13:12:33 ossec-agentd: INFO: Using notify time: 600 and max time to reconnect: 1800
2020/12/04 13:12:33 ossec-agentd(1402): ERROR: Authentication key file '/var/ossec/etc/client.keys' not found.
2020/12/04 13:12:33 ossec-agentd(1751): ERROR: File client.keys not found or empty.
2020/12/04 13:12:33 ossec-agentd(4109): ERROR: Unable to start without auth keys. Exiting.

@conorsch @emkll Can you please have a look and see if you can find the reason? I could not figure out why it was able to start the service before. Still looking.

@kushaldas

This is the service file generated and used by the systemd-sysv-generator:

# Automatically generated by systemd-sysv-generator

[Unit]
Documentation=man:systemd-sysv-generator(8)
SourcePath=/etc/init.d/ossec
Description=LSB: Start daemon at boot time
After=remote-fs.target

[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
SuccessExitStatus=5 6
ExecStart=/etc/init.d/ossec start
ExecStop=/etc/init.d/ossec stop

@kushaldas

Found the issue: our /etc/init.d/ossec file has an "exit 0" line at the end (https://github.com/freedomofpress/securedrop/blob/develop/install_files/ossec-agent/etc/init.d/ossec#L61), which is the reason it worked until now. I will post a full comment and update the PR with a new .service file for only the agent on Monday morning.

The OSSEC server and agent require two different service files.
Details at https://kushaldas.in/posts/story-of-debugging-exit-0.html
@zenmonkeykstop zenmonkeykstop self-assigned this Dec 7, 2020
@zenmonkeykstop zenmonkeykstop left a comment

  • Testinfra passes against the focal and xenial scenarios with the same number of pass/skip/xfail, and no failures
  • OSSEC alerts flow on focal (tested using a test OSSEC alert in the JI)
  • OSSEC alerts continue to flow after app and mon reboots on focal

One nit re: the string for GPG output in testinfra, but I don't view it as a blocker.

pub 4096R/00F4AD77 2016-10-20 [expires: 2021-06-30]
Key fingerprint = 2224 5C81 E3BA EB41 38B3 6061 310F 5612 00F4 AD77
uid SecureDrop Release Signing Key"""
fpf_gpg_pub_key_info = "2224 5C81 E3BA EB41 38B3 6061 310F 5612 00F4 AD77"

Having the full output seems more secure, especially if the testinfra tests are being used to verify a prod system.

@kushaldas kushaldas left a comment

The output is different on Xenial and Focal; that is why I took this path.
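
A rough sketch of that fingerprint-substring approach (illustrative; the gpg invocation is an assumption, not the exact command from the test suite):

FPF_GPG_PUB_KEY_INFO = "2224 5C81 E3BA EB41 38B3 6061 310F 5612 00F4 AD77"


def test_fpf_release_key_fingerprint(host):
    # gpg's human-readable output format differs between Xenial and
    # Focal, so assert only on the key fingerprint rather than on the
    # full, distro-dependent listing.
    out = host.check_output("apt-key finger")  # illustrative command
    assert FPF_GPG_PUB_KEY_INFO in out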

