Merge #19394
19394: tests/rmutex: Drop output dump from README.md r=maribu a=maribu

### Contribution description

We use test scripts to automatically classify the output of a test application as passing or failing. Users are not expected to manually compare the output against a dump of it in the README.
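
For reference, these test scripts conventionally live under `tests/<app>/tests/` and drive the application with pexpect via RIOT's `testrunner` helper. The sketch below is illustrative only and is not the actual `tests/rmutex/tests/01-run.py`; the expect patterns are assumptions derived from the output lines the test prints.

```python
#!/usr/bin/env python3
# Minimal sketch of a RIOT-style pexpect test script (illustrative only,
# not the actual tests/rmutex/tests/01-run.py). It assumes the testrunner
# helper that RIOT's build system provides for `make test`.
import sys

from testrunner import run


def testfunc(child):
    child.expect_exact("Recursive Mutex test")
    # Waiters must acquire the lock ordered by priority (lowest value first).
    for prio in (2, 3, 4, 5, 6):
        child.expect(r"T\d+ \(prio %d, depth 0\): locked rmutex now" % prio)
    child.expect_exact("Test END, check the order of priorities above.")


if __name__ == "__main__":
    sys.exit(run(testfunc))
```

Running `make test` in the application directory invokes such a script against the flashed board or the `native` binary, so the script, not the reader, decides pass or fail.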

### Testing procedure

Doesn't apply

### Issues/PRs references

Fixes #19140

Fixes #19298

Co-authored-by: Marian Buschsieweke <marian.buschsieweke@ovgu.de>
bors[bot] and maribu authored Mar 15, 2023
2 parents fd38db6 + e197abb commit 6894ee4
Showing 1 changed file with 1 addition and 74 deletions.
tests/rmutex/README.md: 75 changes (1 addition, 74 deletions)
@@ -7,80 +7,7 @@
 should be able to lock (and unlock) the mutex first, followed by the
 other threads in the order of their priority (highest next). If two
 threads have the same priority the thread who comes first in the run queue
 (in this test this is the one with the lower thread id) should acquire the
-lock. The output should look like the following:
-
-```
-main(): This is RIOT! (Version: xxx)
-Recursive Mutex test
-Please refer to the README.md for more information
-Recursive Mutex test
-Please refer to the README.md for more information
-T3 (prio 6, depth 0): trying to lock rmutex now
-T4 (prio 4, depth 0): trying to lock rmutex now
-T5 (prio 5, depth 0): trying to lock rmutex now
-T6 (prio 2, depth 0): trying to lock rmutex now
-T7 (prio 3, depth 0): trying to lock rmutex now
-main: unlocking recursive mutex
-T6 (prio 2, depth 0): locked rmutex now
-T6 (prio 2, depth 1): trying to lock rmutex now
-T6 (prio 2, depth 1): locked rmutex now
-T6 (prio 2, depth 2): trying to lock rmutex now
-T6 (prio 2, depth 2): locked rmutex now
-T6 (prio 2, depth 3): trying to lock rmutex now
-T6 (prio 2, depth 3): locked rmutex now
-T6 (prio 2, depth 3): unlocked rmutex
-T6 (prio 2, depth 2): unlocked rmutex
-T6 (prio 2, depth 1): unlocked rmutex
-T6 (prio 2, depth 0): unlocked rmutex
-T7 (prio 3, depth 0): locked rmutex now
-T7 (prio 3, depth 1): trying to lock rmutex now
-T7 (prio 3, depth 1): locked rmutex now
-T7 (prio 3, depth 2): trying to lock rmutex now
-T7 (prio 3, depth 2): locked rmutex now
-T7 (prio 3, depth 3): trying to lock rmutex now
-T7 (prio 3, depth 3): locked rmutex now
-T7 (prio 3, depth 4): trying to lock rmutex now
-T7 (prio 3, depth 4): locked rmutex now
-T7 (prio 3, depth 4): unlocked rmutex
-T7 (prio 3, depth 3): unlocked rmutex
-T7 (prio 3, depth 2): unlocked rmutex
-T7 (prio 3, depth 1): unlocked rmutex
-T7 (prio 3, depth 0): unlocked rmutex
-T4 (prio 4, depth 0): locked rmutex now
-T4 (prio 4, depth 1): trying to lock rmutex now
-T4 (prio 4, depth 1): locked rmutex now
-T4 (prio 4, depth 2): trying to lock rmutex now
-T4 (prio 4, depth 2): locked rmutex now
-T4 (prio 4, depth 2): unlocked rmutex
-T4 (prio 4, depth 1): unlocked rmutex
-T4 (prio 4, depth 0): unlocked rmutex
-T5 (prio 5, depth 0): locked rmutex now
-T5 (prio 5, depth 1): trying to lock rmutex now
-T5 (prio 5, depth 1): locked rmutex now
-T5 (prio 5, depth 2): trying to lock rmutex now
-T5 (prio 5, depth 2): locked rmutex now
-T5 (prio 5, depth 2): unlocked rmutex
-T5 (prio 5, depth 1): unlocked rmutex
-T5 (prio 5, depth 0): unlocked rmutex
-T3 (prio 6, depth 0): locked rmutex now
-T3 (prio 6, depth 1): trying to lock rmutex now
-T3 (prio 6, depth 1): locked rmutex now
-T3 (prio 6, depth 2): trying to lock rmutex now
-T3 (prio 6, depth 2): locked rmutex now
-T3 (prio 6, depth 3): trying to lock rmutex now
-T3 (prio 6, depth 3): locked rmutex now
-T3 (prio 6, depth 4): trying to lock rmutex now
-T3 (prio 6, depth 4): locked rmutex now
-T3 (prio 6, depth 4): unlocked rmutex
-T3 (prio 6, depth 3): unlocked rmutex
-T3 (prio 6, depth 2): unlocked rmutex
-T3 (prio 6, depth 1): unlocked rmutex
-T3 (prio 6, depth 0): unlocked rmutex
-Test END, check the order of priorities above.
-```
+lock.
 
 Background
 ==========
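
The ordering rules the removed dump documented (highest-priority waiter acquires the lock first, lock/unlock depths unwind symmetrically) are exactly what an automated check can assert. As a self-contained illustration, and not RIOT's actual test logic, a checker over a captured log might look like the following; the regular expression and function name are assumptions made for this sketch.

```python
# Illustrative checker for the ordering documented in the removed dump; this
# is not RIOT's actual test code, and the regular expression below is an
# assumption derived from the log lines shown in the old README.
import re

LINE_RE = re.compile(
    r"T(?P<tid>\d+) \(prio (?P<prio>\d+), depth (?P<depth>\d+)\): (?P<what>.+)"
)


def locks_in_priority_order(log: str) -> bool:
    """Return True if threads first acquire the rmutex in priority order."""
    first_lock_prios = []
    seen_tids = set()
    for line in log.splitlines():
        m = LINE_RE.match(line.strip())
        if not m or m.group("what") != "locked rmutex now":
            continue
        tid = int(m.group("tid"))
        prio = int(m.group("prio"))
        depth = int(m.group("depth"))
        if depth == 0 and tid not in seen_tids:
            seen_tids.add(tid)
            first_lock_prios.append(prio)
    # In RIOT a lower numerical value means a higher priority, so the
    # sequence of first acquisitions must be non-decreasing.
    return first_lock_prios == sorted(first_lock_prios)
```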
