Fix high CPU usage of idle workers #136

Merged · 3 commits · May 30, 2022

Changes from all commits
8 changes: 6 additions & 2 deletions CHANGELOG.md
@@ -16,7 +16,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Refactor module structure, propagate errors in worker to service manager [#97](https://github.com/p2panda/aquadoggo/pull/97)
- Restructure storage modules and remove JSON RPC [#101](https://github.com/p2panda/aquadoggo/pull/101)
- Implement new methods required for replication defined by `EntryStore` trait [#102](https://github.com/p2panda/aquadoggo/pull/102)
-- Implement SQL `OperationStore` [103](https://github.com/p2panda/aquadoggo/pull/103)
+- Implement SQL `OperationStore` [#103](https://github.com/p2panda/aquadoggo/pull/103)
- GraphQL client API with endpoint for retrieving next entry arguments [#119](https://github.com/p2panda/aquadoggo/pull/119)
- GraphQL endpoint for publishing entries [#123](https://github.com/p2panda/aquadoggo/pull/132)

@@ -27,7 +27,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Improve `Signal` efficiency in `ServiceManager` [#95](https://github.com/p2panda/aquadoggo/pull/95)
- `EntryStore` improvements [#123](https://github.com/p2panda/aquadoggo/pull/123)
- Improvements for log and entry table layout [#124](https://github.com/p2panda/aquadoggo/issues/122)
-- Update `StorageProvider` API after `p2panda-rs` changes [129](https://github.com/p2panda/aquadoggo/pull/129)
+- Update `StorageProvider` API after `p2panda-rs` changes [#129](https://github.com/p2panda/aquadoggo/pull/129)
+
+### Fixed
+
+- Fix high CPU usage of idle workers [#136](https://github.com/p2panda/aquadoggo/pull/136)

## [0.2.0]

12 changes: 11 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion aquadoggo/Cargo.toml
@@ -20,7 +20,7 @@ async-graphql = "3.0.35"
async-graphql-axum = "3.0.35"
axum = "0.5.1"
bamboo-rs-core-ed25519-yasmf = "0.1.1"
-crossbeam-queue = "0.3.5"
+deadqueue = { version = "0.2.2", default-features = false, features = ["unlimited"] }
directories = "3.0.2"
envy = "0.4.2"
futures = "0.3.17"
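This dependency swap is the heart of the fix: `crossbeam_queue::SegQueue::pop()` is non-blocking and returns an `Option` immediately, while `deadqueue::unlimited::Queue::pop()` is an `async fn` that only resolves once an item has been pushed. Below is a minimal, self-contained sketch of the polling pattern this PR removes, assuming `tokio` (with the `full` feature) and `crossbeam-queue` as dependencies; the `u64` payload and the timing are placeholders, not aquadoggo's actual worker:

```rust
use std::sync::Arc;
use std::time::Duration;

use crossbeam_queue::SegQueue;
use tokio::task;

#[tokio::main]
async fn main() {
    let queue: Arc<SegQueue<u64>> = Arc::new(SegQueue::new());

    // With a non-blocking queue the worker has to poll: on an empty queue
    // `pop()` returns `None` right away, so the loop spins even when there
    // is nothing to do.
    let worker = task::spawn({
        let queue = queue.clone();
        async move {
            loop {
                match queue.pop() {
                    Some(item) => println!("processing {}", item),
                    // Yielding keeps the runtime responsive, but the task is
                    // rescheduled immediately and keeps one core busy.
                    None => task::yield_now().await,
                }
            }
        }
    });

    queue.push(1);
    tokio::time::sleep(Duration::from_millis(10)).await;
    worker.abort();
}
```

Each pass through the `None` arm is effectively a busy-wait: the task yields and is polled again straight away, which is what showed up as high CPU usage on idle nodes.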
79 changes: 37 additions & 42 deletions aquadoggo/src/materializer/worker.rs
@@ -82,7 +82,7 @@ use std::hash::Hash;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

-use crossbeam_queue::SegQueue;
+use deadqueue::unlimited::Queue;
use log::{error, info};
use tokio::sync::broadcast::error::RecvError;
use tokio::sync::broadcast::{channel, Sender};
@@ -131,7 +131,7 @@ where
    input_index: Arc<Mutex<HashSet<IN>>>,

    /// FIFO queue of all tasks for this worker pool.
-    queue: Arc<SegQueue<QueueItem<IN>>>,
+    queue: Arc<Queue<QueueItem<IN>>>,
}

impl<IN> WorkerManager<IN>
@@ -142,7 +142,7 @@ where
    pub fn new() -> Self {
        Self {
            input_index: Arc::new(Mutex::new(HashSet::new())),
-            queue: Arc::new(SegQueue::new()),
+            queue: Arc::new(Queue::new()),
        }
    }
}
@@ -409,50 +409,45 @@ where
        task::spawn(async move {
            loop {
                // Wait until there is a new task arriving in the queue
-                match queue.pop() {
-                    Some(item) => {
-                        // Take this task and do work ..
-                        let result = work.call(context.clone(), item.input()).await;
-
-                        // Remove input index from queue
-                        match input_index.lock() {
-                            Ok(mut index) => {
-                                index.remove(&item.input());
-                            }
-                            Err(err) => {
-                                error!("Error while locking input index: {}", err);
-                                error_signal.trigger();
-                            }
-                        }
-
-                        // .. check the task result ..
-                        match result {
-                            Ok(Some(list)) => {
-                                // Tasks succeeded and dispatches new, subsequent tasks
-                                for task in list {
-                                    match tx.send(task) {
-                                        Err(err) => {
-                                            error!("Error while broadcasting task: {}", err);
-                                            error_signal.trigger();
-                                        }
-                                        _ => (),
-                                    }
-                                }
-                            }
-                            Err(TaskError::Critical) => {
-                                // Something really horrible happened, we need to crash!
-                                error!("Critical error in task {:?}", item);
-                                error_signal.trigger();
-                            }
-                            Err(TaskError::Failure) => {
-                                // Silently fail .. maybe write something to the log or retry?
-                            }
-                            _ => (), // Task succeeded, but nothing to dispatch
-                        }
-                    }
-                    // Call the waker to avoid async runtime starvation when this loop runs
-                    // forever ..
-                    None => task::yield_now().await,
-                }
+                let item = queue.pop().await;
+
+                // Take this task and do work ..
+                let result = work.call(context.clone(), item.input()).await;
+
+                // Remove input index from queue
+                match input_index.lock() {
+                    Ok(mut index) => {
+                        index.remove(&item.input());
+                    }
+                    Err(err) => {
+                        error!("Error while locking input index: {}", err);
+                        error_signal.trigger();
+                    }
+                }
+
+                // .. check the task result ..
+                match result {
+                    Ok(Some(list)) => {
+                        // Tasks succeeded and dispatches new, subsequent tasks
+                        for task in list {
+                            match tx.send(task) {
+                                Err(err) => {
+                                    error!("Error while broadcasting task: {}", err);
+                                    error_signal.trigger();
+                                }
+                                _ => (),
+                            }
+                        }
+                    }
+                    Err(TaskError::Critical) => {
+                        // Something really horrible happened, we need to crash!
+                        error!("Critical error in task {:?}", item);
+                        error_signal.trigger();
+                    }
+                    Err(TaskError::Failure) => {
+                        // Silently fail .. maybe write something to the log or retry?
+                    }
+                    _ => (), // Task succeeded, but nothing to dispatch
+                }
            }
        });
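For comparison, here is the same idea written against `deadqueue::unlimited::Queue`, mirroring the change above: `pop().await` suspends the task until a producer pushes something, so an idle worker costs nothing. This is again a standalone sketch, assuming `deadqueue` (with the `unlimited` feature) and `tokio` (with the `full` feature) as dependencies; the `Task` struct is a hypothetical stand-in for the worker's `QueueItem<IN>`:

```rust
use std::sync::Arc;
use std::time::Duration;

use deadqueue::unlimited::Queue;
use tokio::task;

#[derive(Debug)]
struct Task(u64); // Hypothetical stand-in for the queued input type

#[tokio::main]
async fn main() {
    let queue: Arc<Queue<Task>> = Arc::new(Queue::new());

    // Worker: `pop().await` parks this task until an item arrives, so an
    // empty queue consumes no CPU while the worker is idle.
    let worker = task::spawn({
        let queue = queue.clone();
        async move {
            loop {
                let item = queue.pop().await;
                println!("processing {:?}", item);
            }
        }
    });

    // Producer: `push` is synchronous and wakes a waiting consumer.
    for id in 0..3 {
        queue.push(Task(id));
    }

    tokio::time::sleep(Duration::from_millis(10)).await;
    worker.abort();
}
```

The broadcast channel and the `input_index` bookkeeping of the real worker are left out here; the only difference that matters for CPU usage is that waiting now happens inside the queue instead of in a spin loop.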