
Reported code coverage of TravisCI/coveralls confusing (mixes app and unit test code coverage) #1282

Closed
aryoda opened this issue Sep 2, 2022 · 6 comments · Fixed by #1810
Assignees
Labels
Infrastructure Low relevant, but not urgent

Comments

@aryoda
Contributor

aryoda commented Sep 2, 2022

I have measured the code coverage of the app code (GUI and backend) as described in #1279.

I found an app code coverage of about 33 % while the coverage badge at the backintime README.md start page claims 74 %.

33 % vs. 74 % is a huge difference and misleading IMHO.

Reason:

The TravisCI/coveralls code coverage also considers the unit test source files when calculating the overall code coverage.
Since the unit tests are themselves almost completely executed (at nearly 100 %) while estimating the app code coverage, they distort the coverage result.

Proposal:

The goal of TravisCI/coveralls is mainly to estimate the test coverage of the app code itself.

So I propose to

  1. exclude the unit tests from the code coverage in TravisCI/coveralls
  2. add a make target to measure the code coverage of the unit tests separately.
    This is only required to find unit test code fragments that are not executed, e.g. due to programming errors.
    This make target should be used during development to check if the unit tests are executed as designed.
  3. optionally add a second coverage badge for the unit tests only (if this is possible at all)

See also the documentation of travisCI/coveralls:

https://coveralls-python.readthedocs.io/en/latest/

@aryoda aryoda changed the title Reported code coverage of travisCI/coveralls confusing (mixes app and unit test code coverage) Reported code coverage of TravisCI/coveralls confusing (mixes app and unit test code coverage) Sep 2, 2022
@aryoda
Contributor Author

aryoda commented Sep 2, 2022

I think modifying the .coveragerc config file could work to exclude the unit tests from the overall reported coverage.

I suggest adding this line to the [report] section to ignore all test folders (containing the unit test source code files):

omit = */test/*
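For context, a minimal `.coveragerc` along those lines might look like this (a sketch only; the exact glob for the test folders is an assumption about the BIT repo layout):

```ini
[report]
# Exclude unit-test source files from the reported coverage;
# the glob assumes test files live in directories named "test"
omit =
    */test/*
```

coverage.py also accepts `omit` in the `[run]` section; placing it under `[report]` only filters the report while the underlying data is still recorded.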

@emtiu Since testing TravisCI/coveralls typically requires many roundtrips of pushing and fixing the config files, I propose to do it directly in master or in a temporary TravisCI testing branch (so I am out of the game, but you have commit rights). I could also send a PR and observe the TravisCI/coveralls results, but this is a little bit "dangerous" since you may not merge it until it works...

@buhtz
Member

buhtz commented Sep 4, 2022

Fact is that the BIT repo currently doesn't follow any (official) recommendations on how a project folder should be laid out. This is a big topic we need to discuss later, on the dev list.

But in the context of code coverage: I vote for removing code coverage from make, from Travis and from the README. This info is useless in those places.

Why does Travis need that number? Does it give a red light when the number is under a threshold?
Why the README? It is just useless bling-bling for the users, like the 3D-rotating-@-signs or dancing-hamster GIFs in the 90s.

Each developer should run their own coverage on their local copy of the repo.

@aryoda
Contributor Author

aryoda commented Sep 4, 2022

I vote for remove code coverage from make, from travis and from the readme.

This is a valid opinion, and especially the end user doesn't care much about code coverage or the "total coverage" magic number.

I personally (as developer) do heavily use code coverage to identify source code lines that are not yet covered by unit tests
so that I can add more unit tests step-by-step to improve the code stability and avoid regressions (broken functionality that worked before).

My usual workflow is to create the code coverage report with python3 -m coverage html, open it in the browser and drill down to the source code files to see the code lines that are not called by a single unit test - see the red parts in the picture (other coverage tools even show how often each line is called ;-))

Then I write a unit test that will call this untested part of the code.

So creating and using code coverage reports are essential for me.

Why does Travis need that number? Does it give a red light when the number is under a threshold?

TravisCI doesn't care, but there could and should be a policy for each non-trivial project
a) to achieve a minimal code coverage
b) that the code coverage may not drop but only grow (if you click on the coverage badge icon in the README.md you can see the history of the code coverage, which is perfect for tracking this policy)
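For illustration, the coveralls-python docs linked above wire coverage into Travis roughly like this (a sketch only; the test discovery path is hypothetical, not BIT's actual config):

```yaml
# Sketch of the relevant .travis.yml steps (paths are illustrative)
install:
  - pip install coveralls
script:
  - python3 -m coverage run -m unittest discover
after_success:
  - coveralls   # uploads the recorded coverage to coveralls.io
```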

Each developer should run its own coverage on its local copy of the repo.

That is exactly why I have added the make target "coverage" in my pull request.
Removing it (or throwing it away) would mean that

  • each developer must implement a personal script for creating the coverage and
  • the coverage value may be measured differently (bad for defining a policy like "code coverage must be at least 75 %")

So I am asking to keep the code coverage things in the project and merge my pull request :-)
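A `make coverage` target along those lines could look like this (a hedged sketch; the real target in the pull request and the test discovery path may differ):

```make
# Hypothetical Makefile target; the test directory is an assumption
coverage:
	python3 -m coverage run -m unittest discover -s test
	python3 -m coverage html
	@echo "Open htmlcov/index.html in a browser to inspect uncovered lines"
```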

(Screenshot: coverage example)

@buhtz
Member

buhtz commented Sep 4, 2022

OK, you convinced me. We are on the same train about that topic.

Currently I don't care about that number, because even the tests that exist are not "good enough" to let me sleep well at night. The tests are nearly undocumented and unreadable. You have to dive into them to understand them. Currently it feels to me as if we don't have any tests. ;)

Besides the new issue triage, exploring the unit tests and creating new ones will be one of my primary tasks; just to give me better sleep.

In short: I'm paranoid when it comes to tests. ;)

@aryoda
Contributor Author

aryoda commented Sep 4, 2022

exploring of unittests and creating new ones would be one of my primary tasks

Perfect, unit tests are a good way to understand existing code step-by-step and even become a better programmer.

I would use one test code file per source code file, or perhaps even one per "function/class under test".

The challenge is always to keep the test code and file names in sync with the class/function and file names "under test" after every renaming (e.g. during refactoring).

@buhtz
Member

buhtz commented Oct 4, 2023

Becomes invalid if #1463 is solved.
