Fixes issue366 #367
Conversation
Codecov Report
@@ Coverage Diff @@
## master #367 +/- ##
============================================
+ Coverage 52.05% 52.08% +0.02%
- Complexity 7150 7155 +5
============================================
Files 383 383
Lines 40270 40269 -1
Branches 6504 6506 +2
============================================
+ Hits 20964 20975 +11
+ Misses 17796 17782 -14
- Partials 1510 1512 +2
Continue to review full report at Codecov.
force-pushed from e9a30eb to 467b35e
original unit-test idea courtesy Benjamin Peters (@dedeibel)
force-pushed from 467b35e to eb3cac8
Without having done a deep analysis of the locking, the logic looks correct. I ran the test on a 12-core/24-thread CPU with various combinations and could not make it fail.
The synchronized blocks are worrying, however, and the performance is quite a bit worse. I uploaded a video comparing this PR (left) to 11.2.5 (right):
The counter shows how often the same plot updates on both versions. Each plot has 3x 6k points coming in at roughly 640 Hz.
N.B. read(Un)Lock(..) still works, but its use should be deprecated in favour of the lock guards due to its worse performance
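For illustration, a minimal sketch of the lock-guard pattern preferred above over explicit read(Un)Lock(..) calls: the guard takes a lambda, acquires the lock, runs the action, and guarantees the unlock in a finally block. Class and method names here are hypothetical, not chart-fx's actual API.

```java
import java.util.concurrent.locks.StampedLock;
import java.util.function.Supplier;

public class ReadLockGuard {
    private final StampedLock lock = new StampedLock();

    /** runs 'action' under the read lock; the unlock cannot be forgotten, even on exceptions */
    public <R> R readLockGuard(final Supplier<R> action) {
        final long stamp = lock.readLock();
        try {
            return action.get();
        } finally {
            lock.unlockRead(stamp);
        }
    }
}
```

Usage would be e.g. `final double min = guard.readLockGuard(() -> computeMin());` instead of a manual lock/unlock pair that is easy to mismatch.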
…k(..) N.B. avoids the dead-lock. de.gsi.chart.samples.legacy.ChartHighUpdateRateSample is a good test case to reproduce the write race condition and to verify the absence of the dead-lock.
@ennerf thanks for your follow-up. I could reproduce your observation using the example 'de.gsi.chart.samples.legacy.ChartHighUpdateRateSample', which updates the DataSet at 1 kHz (N.B. the built-in chart-fx rate-limiter limits this to <50 Hz). I replaced the internal writeLock() with a spinning StampedLock::tryWriteLock(..); the unit tests and the sample seem to work on my side, but it would be nice to see whether your use-case is still OK. N.B. this avoids the dead-lock.
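The spinning tryWriteLock(..) replacement described above can be sketched roughly as follows. This is a simplified illustration, not the actual chart-fx implementation; the class and method names are assumptions.

```java
import java.util.concurrent.locks.StampedLock;

public class SpinningWriteLock {
    private final StampedLock stampedLock = new StampedLock();

    /** spins until the write lock is acquired; returns the stamp needed for unlocking */
    public long writeLock() {
        long stamp;
        // tryWriteLock() returns 0L when the lock is not available; busy-wait instead of blocking
        while ((stamp = stampedLock.tryWriteLock()) == 0L) {
            Thread.onSpinWait(); // JDK 9+ hint that this thread is in a spin loop
        }
        return stamp;
    }

    public void writeUnLock(final long stamp) {
        stampedLock.unlockWrite(stamp);
    }
}
```

Spinning avoids the blocking park/unpark path (one possible source of the dead-lock) at the cost of burning CPU while contended, which would be consistent with the write-lock frequency topping out in the low-kHz range.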
The performance is maybe slightly better than with the first fix, but it's tough to tell visually. It is still noticeably worse than pre-fix.
@ennerf based on the stress tests, the max write-lock frequency is now around 2-3 kHz... In your example: how many writers/readers and update/read rates do you typically have? Presently -- apart from getting the [...] -- I have to think about whether one can still use atomics and take the stampedLock.readLock() only for the first/critical reader...
1 writer + 1x UI-thread reader with the concurrent rendering parts. The update rate for each plot is 1-4 x 0.1-1 kHz.
I'll give it another try with atomics-only ...
modified read-lock and re-introduced atomic guards: only the first reader acquires a lock, subsequent readers increase an atomic counter, and only the last remaining reader unlocks; detects race conditions on the initial lock and the final unlock
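The reader-counting scheme described above can be sketched as below. This is a simplified, hedged illustration assuming a StampedLock underneath; the real implementation additionally detects race conditions on the initial lock and the final unlock, which this sketch omits.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.StampedLock;

public class CountedReadLock {
    private final StampedLock stampedLock = new StampedLock();
    private final AtomicInteger readerCount = new AtomicInteger(0);
    private final AtomicLong lastReadStamp = new AtomicLong(-1L);

    public void readLock() {
        if (readerCount.getAndIncrement() == 0) {
            // first reader: acquire the real read lock and publish its stamp
            lastReadStamp.set(stampedLock.readLock());
        } else {
            // subsequent readers: only wait until the first reader has published the stamp
            while (lastReadStamp.get() == -1L) {
                Thread.onSpinWait();
            }
        }
    }

    public void readUnLock() {
        if (readerCount.decrementAndGet() == 0) {
            // last remaining reader: release the lock and reset the published stamp
            final long stamp = lastReadStamp.getAndSet(-1L);
            stampedLock.unlockRead(stamp);
        }
    }
}
```

The pay-off is that N concurrent readers cost one StampedLock acquisition plus N-1 cheap atomic increments, rather than N lock acquisitions.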
private transient Thread writeLockedByThread; // NOPMD
private final transient AtomicInteger readerCount = new AtomicInteger(0);
private final transient AtomicInteger writerCount = new AtomicInteger(0);
private final AtomicLong lastReadStamp = new AtomicLong(-1L);
should these fields still be transient?
I observed no more test failures with this version and could not see any loopholes at the moment. Thanks.