
Lower code coverage requirements #1389

Closed
yurishkuro opened this issue Feb 28, 2019 · 7 comments

Comments

@yurishkuro
Member

yurishkuro commented Feb 28, 2019

Requirement - what kind of business use case are you trying to solve?

We want to merge some PRs (#760, #1311) that have very good test coverage, but not 100%, because some error checking code is very hard to test. For example, x509.SystemCertPool() returns an error when run on Windows, and we never test on Windows.
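To make the problem concrete, here is a minimal sketch of the kind of code in question (`loadSystemPool` is a hypothetical name, not actual Jaeger code): the error branch only fires on platforms where tests never run, so it always shows up as uncovered.

```go
package main

import (
	"crypto/x509"
	"fmt"
)

// loadSystemPool is a hypothetical wrapper around x509.SystemCertPool.
// The error branch is exactly the kind of line that stays uncovered:
// on the Linux/macOS hosts where tests run, the call succeeds, and
// historically it only returned an error on Windows.
func loadSystemPool() (*x509.CertPool, error) {
	pool, err := x509.SystemCertPool()
	if err != nil {
		// Unreachable in CI on non-Windows hosts, so the coverage
		// report counts this return as a miss.
		return nil, fmt.Errorf("loading system cert pool: %w", err)
	}
	return pool, nil
}

func main() {
	pool, err := loadSystemPool()
	fmt.Println(pool != nil, err)
}
```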

Problem - what in Jaeger blocks you from solving the requirement?

Current .codecov settings prevent merging PRs with less than 100% coverage.

Proposal - what do you suggest to solve the problem or improve the existing situation?

At a recent project video call we discussed that we may want to lower the target levels for PRs and the repo overall.

Any open questions to address

What are the new targets that we want to set? The PR target cannot be too aggressive because with small PRs one missed error check can lower the coverage significantly. Maybe PR=75%? The project target should stay reasonably high, e.g. 90% or even 95%.
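For illustration, a `.codecov.yml` along these lines would encode those numbers (the 75%/95% figures are only the values floated above, not a decision):

```yaml
coverage:
  status:
    project:
      default:
        target: 95%   # repo-wide floor stays reasonably high
    patch:
      default:
        target: 75%   # per-PR target; small PRs get slack for missed error checks
```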

@jpkrohling
Contributor

Current .codecov settings prevent merging PRs with less than 100% coverage.

It shouldn't prevent merging. It does show a "red" (failed) icon in the checklist, but the PR should still be mergeable. For reference, this is what I see when a PR for the Operator has low coverage:

[screenshot: a red codecov check on an otherwise mergeable Operator PR]

What are the new targets that we want to set?

I think the goal should still be 100% as a rule, but we can use our human judgment and relax this requirement on a per-PR basis.

@pavolloffay
Member

We could exclude problematic code from coverage. The current approach is not very nice, though: it requires moving the code to a separate file and using build tags: https://stackoverflow.com/questions/25511076/ignore-code-blocks-in-golang-test-coverage-calculation

@yurishkuro
Member Author

@jpkrohling even if red coverage does not block merging, once we merge something with less than 100%, these sections will always be red for everyone, so I still think we'd need to adjust the levels. If we can explicitly exclude specific lines without lowering the targets, that would be a better approach.

@yurishkuro
Member Author

Had a look at the SO link, but it just says to move untestable code to a separate file (we already do that today for certain files, e.g. model.pb.go). However, if we have an otherwise testable function with an error check inside that we cannot simulate, such an approach doesn't really help. We could introduce a comment style that tells the build to exclude the next N lines from the coverage calculation.

@pavolloffay
Member

We could introduce a comment style that tells the build to exclude the next N lines from the coverage calculation.

That would be perfect. I only had a quick look for something like that and found only the exclusion via build tags.

@yurishkuro
Member Author

We could introduce a comment style that tells the build to exclude the next N lines from the coverage calculation.

I looked around for tools that allow that but didn't find any. I don't think it's very easy to achieve, especially if we want the comments/annotations to be recognized at the statement level, e.g.

if err != nil { // noqa
    // do something
    return err
}

Here, the // noqa annotation applies to the whole if statement. To recognize it and manipulate the coverprofile files accordingly, we'd need AST parsing.

@jpkrohling
Contributor

I think we might be overthinking this. Let's start with the simplest solution that works, even if it's not perfect: remove the hard requirement from .codecov.yml. This is how we have it in the Jaeger Operator and it works fine: PRs that decrease the coverage will fail the check, but you can still merge them. Otherwise, the check passes (i.e., coverage is "as good as" or "better").

Examples:
As good as: jaegertracing/jaeger-operator#354
Better: jaegertracing/jaeger-operator#353
Worse: jaegertracing/jaeger-operator#324

IMO, the goal of having "100% coverage" isn't a good one in the first place. First, because we don't have 100% coverage today, no matter what the badge says (find . -name .nocover). Second, because 100% coverage doesn't mean that good testing is happening; it just means that all lines or conditions are being exercised. Good testing, including input/positive/negative/boundary checks, should be assessed during code review. Code coverage should only be used as an early indication of the state of a PR's tests, not as a requirement.
