# Quality Monitor GitHub Action

[![GitHub Actions](https://github.com/uhafner/autograding-github-action/workflows/CD/badge.svg)](https://github.com/uhafner/autograding-github-action/actions/workflows/cd.yml)
[![CodeQL](https://github.com/uhafner/autograding-github-action/workflows/CodeQL/badge.svg)](https://github.com/uhafner/autograding-github-action/actions/workflows/codeql.yml)
[![Style Warnings](https://raw.githubusercontent.com/uhafner/autograding-github-action/main/badges/style-warnings.svg)](https://github.com/uhafner/autograding-github-action/actions/workflows/dogfood.yml)
[![Potential Bugs](https://raw.githubusercontent.com/uhafner/autograding-github-action/main/badges/bugs.svg)](https://github.com/uhafner/autograding-github-action/actions/workflows/dogfood.yml)

This GitHub action monitors the quality of projects based on a configurable set of metrics and gives feedback on pull requests (or single commits) in GitHub. This action is a stand-alone version of my Jenkins [Warnings](https://github.com/jenkinsci/warnings-ng-plugin) and [Coverage](https://github.com/jenkinsci/coverage-plugin) plugins. It can be used in any GitHub project that uses GitHub Actions. A [similar action](https://github.com/uhafner/autograding-gitlab-action) is available for GitLab projects as well.

You can see the results of this action in an [example pull request](https://github.com/uhafner/autograding-github-action/pull/311) and the associated [GitHub Checks output](https://github.com/uhafner/autograding-github-action/runs/19411191545). Another real-life example is visible in the [pull request](https://github.com/uhafner/java2-assignment1/pull/31) and [checks result](https://github.com/uhafner/java2-assignment1/runs/19411468263) of a fake student project.

![Pull request comment](images/pr-comment.png)

Please note that the action works on report files that are generated by other tools; it does not run the tests or static analysis tools itself. You need to run these tools in a previous step of your workflow (see the example below for details). This has the advantage that you can use the tooling you are already familiar with, so the action works for any programming language that can generate the required report files. More than [one hundred analysis formats](https://github.com/jenkinsci/analysis-model/blob/main/SUPPORTED-FORMATS.md) are already supported. Code and mutation coverage reports can use the JaCoCo, Cobertura, and PIT formats; see the [coverage model](https://github.com/jenkinsci/coverage-model) for details. Test results can be provided in the [JUnit XML format](https://maven.apache.org/surefire/maven-surefire-plugin/xsd/surefire-test-report.xsd).
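
For example, a build step that produces such report files might look like the following sketch (assuming a Maven project; the goals and report locations depend on your build setup):

```yaml
[...]
  - name: Build and test
    # Surefire writes JUnit XML reports to target/surefire-reports by default;
    # coverage and static analysis reports require the corresponding Maven plugins.
    run: mvn -ntp clean verify
[...]
```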

# GitHub Checks

The detailed output of the action is shown in the GitHub Checks tab of the pull request:

![GitHub checks result](images/details.png)

# Configuration
# Howto

You can use this action in any GitHub project that uses GitHub Actions. The following example shows how to use this action with the default settings in a Java project that uses Maven as a build tool.

```yaml
name: 'Quality Monitor'

on:
  push

jobs:
  monitor-project-quality:
    name: Run the quality monitor
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # [... build the project with your own tooling so that the report files exist ...]
      - name: Extract pull request number # (commenting on the pull request requires the PR number)
        uses: jwalton/gh-find-current-pr@v1
        id: pr
      - name: Run Quality Monitor
        uses: uhafner/quality-monitor@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          pr-number: ${{ steps.pr.outputs.number }}
```

# Configuration
The individual metrics can be configured by defining an appropriate `config` property (in JSON format) in your GitHub workflow:

```yaml
[...]
  - name: Run Quality Monitor
    uses: uhafner/quality-monitor@v1
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      pr-number: ${{ steps.pr.outputs.number }}
      config: > # Override default configuration: just evaluate the test results
        {
          "tests": {
            "name": "JUnit",
            "tools": [
              {
                "id": "test",
                "name": "Unittests",
                "pattern": "**/target/*-reports/TEST*.xml"
              }
            ]
          }
        }
[...]
```

Currently, you can select from the metrics shown in the following sections. Each metric can be configured individually. All of these configurations are composed in the same way: you define a list of tools that are used to collect the data, and a name and icon for the metric. Each tool needs to provide a pattern that tells the quality monitor where to find the result files in the workspace (e.g., JUnit XML reports). Additionally, each tool needs to provide its parser ID so that the underlying model can select the correct parser to read the results. See the [analysis model](https://github.com/jenkinsci/analysis-model) and [coverage model](https://github.com/jenkinsci/coverage-model) projects for the list of supported parsers.
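
All metric configurations follow the same schematic shape (a sketch with illustrative placeholder values; here `tests` stands for any of the metric objects described below):

```json
{
  "tests": {
    "name": "Display name of the metric",
    "icon": "optional-icon-name",
    "tools": [
      {
        "id": "parser-id-from-analysis-or-coverage-model",
        "name": "Display name of the tool",
        "pattern": "**/ant-style/pattern/for/report-files.xml"
      }
    ]
  }
}
```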

## Test statistics

![Test statistics](images/tests.png)

This metric can be configured using a JSON object `tests`; see the example below for details:
```json
{
  "tests": {
    "name": "JUnit",
    "tools": [
      {
        "id": "test",
        "name": "Unittests",
        "pattern": "**/junit*.xml"
      }
    ]
  }
}
```

Skipped tests will be listed individually. For failed tests, the test error message and stack trace will be shown directly after the summary in the pull request.

## Code or mutation coverage

![Code coverage summary](images/coverage.png)

This metric can be configured using a JSON object `coverage`; see the example below for details:

```json
{
  "coverage": [
    {
      "name": "JaCoCo",
      "tools": [
        {
          "id": "jacoco",
          "name": "Line Coverage",
          "metric": "line",
          "sourcePath": "src/main/java",
          "pattern": "**/jacoco.xml"
        },
        {
          "id": "jacoco",
          "name": "Branch Coverage",
          "metric": "branch",
          "sourcePath": "src/main/java",
          "pattern": "**/jacoco.xml"
        }
      ]
    },
    {
      "name": "PIT",
      "tools": [
        {
          "id": "pit",
          "name": "Mutation Coverage",
          "metric": "mutation",
          "sourcePath": "src/main/java",
          "pattern": "**/mutations.xml"
        }
      ]
    }
  ]
}
```

Please make sure to define a unique and supported metric for each tool. For example, JaCoCo provides `line` and `branch` coverage, so you need to define two tools for JaCoCo. PIT provides mutation coverage, so you need to define a tool for PIT that uses the metric `mutation`.

Missed lines or branches as well as survived mutations will be shown as annotations in the pull request:

![Code coverage annotations](images/coverage-annotations.png)

## Static analysis

![Static analysis](images/analysis.png)

This metric can be configured using a JSON object `analysis`; see the example below for details:

```json
{
  "analysis": [
    {
      "name": "Style",
      "tools": [
        {
          "id": "checkstyle",
          "name": "CheckStyle",
          "pattern": "**/target/checkstyle-result.xml"
        },
        {
          "id": "pmd",
          "name": "PMD",
          "pattern": "**/target/pmd.xml"
        }
      ]
    },
    {
      "name": "Bugs",
      "tools": [
        {
          "id": "spotbugs",
          "name": "SpotBugs",
          "sourcePath": "src/main/java",
          "pattern": "**/target/spotbugsXml.xml"
        }
      ]
    }
  ]
}
```

All warnings will be shown as annotations in the pull request:

![Warning annotations](images/analysis-annotations.png)

## Action parameters

This action can be configured using the following parameters (see the example above and the combined sketch after this list):
- ``github-token: ${{ secrets.GITHUB_TOKEN }}``: mandatory GitHub access token.
- ``config: "{...}"``: optional configuration, see sections above for details, or consult the [autograding-model](https://github.com/uhafner/autograding-model) project for the exact implementation. If not specified, a [default configuration](https://raw.githubusercontent.com/uhafner/autograding-model/main/src/main/resources/default-no-score-config.json) will be used.
- ``pr-number: ${{ steps.pr.outputs.number }}``: optional number of the pull request. If not set, then just the checks will be published but not a pull request comment.
- ``checks-name: "Name of checks"``: optional name of GitHub checks (overwrites the default: "Quality Monitor").
- ``skip-annotations: true``: optional flag to skip the creation of annotations (for warnings and missed coverage).
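
Putting these together, an invocation that sets all parameters could look like the following sketch (the values are illustrative):

```yaml
[...]
  - name: Run Quality Monitor
    uses: uhafner/quality-monitor@v1
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}   # mandatory
      pr-number: ${{ steps.pr.outputs.number }}   # optional: enables the pull request comment
      checks-name: "My Quality Checks"            # optional: overwrites the default checks name
      skip-annotations: true                      # optional: no warning or coverage annotations
      config: > # optional: overrides the default configuration
        {
          "tests": {
            "name": "JUnit",
            "tools": [
              {
                "id": "test",
                "pattern": "**/junit*.xml"
              }
            ]
          }
        }
[...]
```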

## Pull Request Comments

The action writes a summary of the results to the pull request as well. Since the action cannot identify the correct pull request on its own, you need to provide the pull request number as an action argument.

```yaml
[...]
  - name: Extract pull request number
    uses: jwalton/gh-find-current-pr@v1
    id: pr
  - name: Run Quality Monitor
    uses: uhafner/quality-monitor@v1
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      pr-number: ${{ steps.pr.outputs.number }}
      checks-name: "Quality Monitor GitHub Action"
      config: {...}
[...]
```

Configuring the action in this way will produce an additional comment of the form:

![Pull request comment](images/pr-comment.png)

## Badges
[![Potential Bugs](https://raw.githubusercontent.com/uhafner/autograding-github-action/main/badges/bugs.svg)](https://github.com/uhafner/autograding-github-action/actions/workflows/dogfood.yml)


The results of the action can be used to create badges that show the current status of the project. The action writes its results to a file called `metrics.env` in the workspace. This file can be used to create badges with the [GitHub Badge Action](https://github.com/marketplace/actions/badge-action). The following snippet shows how to create several badges for your project; the full example is visible in [my autograding workflow](https://raw.githubusercontent.com/uhafner/autograding-github-action/main/.github/workflows/dogfood.yml).

```yaml
[...]
  - name: Run Quality Monitor
    uses: uhafner/quality-monitor@v1
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      pr-number: ${{ steps.pr.outputs.number }}
  - name: Write metrics to GitHub output
    id: metrics
    run: |
      [...]
```
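
The elided `run` step copies the `KEY=VALUE` pairs from `metrics.env` into the step outputs, which the badge action can then consume. A hedged sketch of such a continuation (the badge parameters, the version pin, and the `line` output name are assumptions based on the linked example workflow):

```yaml
[...]
  - name: Write metrics to GitHub output
    id: metrics
    run: |
      # Expose every KEY=VALUE pair from metrics.env as a step output.
      cat metrics.env >> "$GITHUB_OUTPUT"
  - name: Generate the line coverage badge
    uses: emibcn/badge-action@v2.0.2   # the GitHub Badge Action linked above
    with:
      label: 'Lines'
      status: "${{ steps.metrics.outputs.line }}%"
      color: 'green'
      path: badges/line-coverage.svg
[...]
```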