[iota-indexer]: Optimize tests #3492
Comments
Unfortunately, using a database per test is not working as expected. Postgres's default `max_connections` is 100, and for every test we create two connections, one for writing and one for reading. Even if we increase the max connections, the test execution times are worse than with the shared cluster. I suppose this could be due to more resource-intensive operations when handling multiple databases at the same time, plus the overhead of dealing with multiple concurrent connections.
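The arithmetic behind the connection limit can be sketched as follows (a minimal illustration; the helper name is made up, not part of the codebase):

```rust
// Hypothetical helper: how many tests can run concurrently before
// exhausting the Postgres connection limit.
fn max_parallel_tests(max_connections: u32, connections_per_test: u32) -> u32 {
    max_connections / connections_per_test
}

fn main() {
    // Postgres default max_connections is 100; each test opens 2
    // connections (one read, one write).
    let limit = max_parallel_tests(100, 2);
    println!("{limit}");
}
```

So even before raising `max_connections`, at most 50 tests can hold their connections at the same time.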
```sh
cargo nextest run --cargo-profile simulator -p iota-indexer --test rpc-tests read_api --features pg_integration --test-threads 1
cargo nextest run --cargo-profile simulator -p iota-indexer --test rpc-tests read_api --features pg_integration
```

Due to this regression in performance, it seems a good choice to stick with the shared cluster.
The following PR #3018 adds the ability to have more than one shared simulacrum cluster and to share those clusters between groups of tests. Building on that:

- We would need to introduce the same ability for standard clusters (based on `TestCluster`).
- While doing the above, we should try to minimize the number of DBs created by tests, so we don't end up with a huge number of DBs after running the tests several times.
- Bonus point to consider, not necessarily in scope of this issue:
@iotaledger/l1-core-infra I was thinking: could using a trait to create a shared `TestCluster` or `Simulacrum` be a good idea?

```rust
// common.rs
static GLOBAL_API_TEST_SETUP: OnceLock<InitializedClusterEnv> = OnceLock::new();

pub trait SharedTestCluster {
    /// Unique name of the shared test runtime.
    fn name() -> &'static str;

    async fn setup_test_cluster() -> TestCluster {
        TestClusterBuilder::new().build().await
    }

    fn init() -> InitializedClusterEnv {
        let runtime = tokio::runtime::Runtime::new().unwrap();
        let db_name = format!("shared_cluster_env_db_{}", Self::name());
        let (cluster, store, client) = runtime.block_on(async {
            let cluster = Self::setup_test_cluster().await;
            start_test_cluster_with_read_write_indexer_new(Some(&db_name), cluster).await
        });
        InitializedClusterEnv {
            runtime,
            cluster,
            store,
            client,
        }
    }

    fn get_or_init() -> &'static InitializedClusterEnv;
}

pub trait SharedSimulacrum {
    /// Unique name of the shared simulacrum runtime.
    fn name() -> &'static str;

    fn setup_simulacrum() -> Simulacrum;

    fn init() -> InitializedSimulacrumEnv {
        let runtime = tokio::runtime::Runtime::new().unwrap();
        let sim = Arc::new(Self::setup_simulacrum());
        let db_name = format!("simulacrum_env_db_{}", Self::name());
        let (_, store, _, client) = runtime.block_on(
            start_simulacrum_rest_api_with_read_write_indexer(sim.clone(), Some(&db_name)),
        );
        InitializedSimulacrumEnv {
            runtime,
            sim,
            store,
            client,
        }
    }

    fn get_or_init() -> &'static InitializedSimulacrumEnv;
}

pub struct GlobalTestCluster;

impl SharedTestCluster for GlobalTestCluster {
    fn name() -> &'static str {
        "global_shared_cluster"
    }

    fn get_or_init() -> &'static InitializedClusterEnv {
        GLOBAL_API_TEST_SETUP.get_or_init(Self::init)
    }
}
```

```rust
// extended_api.rs
// Example of a custom simulacrum implementation.
static EXTENDED_API_SIMULACRUM: OnceLock<InitializedSimulacrumEnv> = OnceLock::new();

struct SimulacrumExtendedApi;

impl SharedSimulacrum for SimulacrumExtendedApi {
    fn name() -> &'static str {
        "extended_api"
    }

    fn setup_simulacrum() -> Simulacrum {
        let mut sim = Simulacrum::new();
        execute_simulacrum_transactions(&mut sim, 15);
        add_checkpoints(&mut sim, 300);
        sim.advance_epoch(false);
        execute_simulacrum_transactions(&mut sim, 10);
        add_checkpoints(&mut sim, 300);
        sim.advance_epoch(false);
        execute_simulacrum_transactions(&mut sim, 5);
        add_checkpoints(&mut sim, 300);
        sim
    }

    fn get_or_init() -> &'static InitializedSimulacrumEnv {
        EXTENDED_API_SIMULACRUM.get_or_init(Self::init)
    }
}

#[test]
fn get_epochs() {
    let InitializedSimulacrumEnv {
        runtime,
        sim,
        store,
        client,
    } = SimulacrumExtendedApi::get_or_init();
    // ...
}
```

```rust
// indexer_api.rs
// Example of a custom test cluster implementation.
static INDEXER_RUNTIME: OnceLock<InitializedClusterEnv> = OnceLock::new();

pub struct IndexerApiTestCluster;

impl SharedTestCluster for IndexerApiTestCluster {
    fn name() -> &'static str {
        "indexer_runtime"
    }

    async fn setup_test_cluster() -> TestCluster {
        TestClusterBuilder::new()
            .with_epoch_duration_ms(1000)
            .build()
            .await
    }

    fn get_or_init() -> &'static InitializedClusterEnv {
        INDEXER_RUNTIME.get_or_init(Self::init)
    }
}

#[test]
fn query_events_no_events_descending() {
    let InitializedClusterEnv {
        runtime,
        store,
        client,
        ..
    } = IndexerApiTestCluster::get_or_init();
    // ...
}
```
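Stripped of the iota-specific types, the sharing mechanism in the proposal above boils down to a `OnceLock` behind a trait. A runnable sketch with a cheap stand-in resource (all names here are hypothetical, chosen only to mirror the shape of the proposal):

```rust
use std::sync::OnceLock;

// Stand-in for the expensive cluster environment.
struct SharedEnv {
    db_name: String,
}

trait SharedResource {
    /// Unique name of the shared resource, used to derive the DB name.
    fn name() -> &'static str;

    fn init() -> SharedEnv {
        // In the real proposal this would spin up a runtime and a cluster;
        // here we only build the derived database name.
        SharedEnv {
            db_name: format!("shared_cluster_env_db_{}", Self::name()),
        }
    }

    fn get_or_init() -> &'static SharedEnv;
}

struct GlobalEnv;
static GLOBAL_ENV: OnceLock<SharedEnv> = OnceLock::new();

impl SharedResource for GlobalEnv {
    fn name() -> &'static str {
        "global"
    }

    fn get_or_init() -> &'static SharedEnv {
        // init runs at most once per process, no matter how many
        // tests call get_or_init concurrently.
        GLOBAL_ENV.get_or_init(Self::init)
    }
}

fn main() {
    let a = GlobalEnv::get_or_init();
    let b = GlobalEnv::get_or_init();
    // Both calls observe the exact same instance.
    assert!(std::ptr::eq(a, b));
    println!("{}", a.db_name); // prints "shared_cluster_env_db_global"
}
```

The `OnceLock` per implementing type is what lets different test groups opt into different shared environments while still initializing each one exactly once.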
After the faucet approach was introduced in #3600, the need for more shared test clusters dropped significantly. Closing the issue since we don't really need it anymore.
Description

After the investigation on optimizing the `iota-json-rpc-tests`, it seems like a good approach to do the same for the `iota-indexer`: revert the singleton implementation and use the `#[sim_test]` macro. We do lose a bit of performance, but we gain the flexibility to configure the `TestCluster` for each test case, which is very important and quite difficult to achieve with a singleton. Also, `cargo nextest` will work again.

To solve the issue of multiple indexer instances, we could use a database per test case, implemented in this PR
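A database-per-test scheme needs a unique, valid database name for each test case. A minimal sketch of deriving one from the test name (hypothetical helper, not taken from the linked PR), assuming the test name is available as a string:

```rust
// Hypothetical: derive a Postgres-safe database name from a test name.
fn test_db_name(test_name: &str) -> String {
    // Replace anything that is not ASCII alphanumeric (e.g. "::" in Rust
    // test paths) with '_', and lowercase the rest.
    let sanitized: String = test_name
        .chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() {
                c.to_ascii_lowercase()
            } else {
                '_'
            }
        })
        .collect();
    let mut name = format!("test_{sanitized}");
    // Postgres truncates identifiers at 63 bytes; do it explicitly.
    name.truncate(63);
    name
}

fn main() {
    println!("{}", test_db_name("read_api::get_object"));
    // prints "test_read_api__get_object"
}
```

Such names also make cleanup simple: dropping every database with the `test_` prefix after a run would address the concern above about accumulating DBs.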