feat(zk_toolbox): Migrate db during update (#2995)
## What ❔

Update the databases during the update process; the update command is now a universal recipe for
pulling in changes.
## Why ❔


## Checklist


- [ ] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk_supervisor fmt` and `zk_supervisor lint`.

Signed-off-by: Danil <deniallugo@gmail.com>
Deniallugo authored Oct 2, 2024
1 parent f57719c commit eed8198
Showing 6 changed files with 44 additions and 25 deletions.
10 changes: 6 additions & 4 deletions docs/guides/external-node/00_quick_start.md
```diff
@@ -51,12 +51,14 @@ The HTTP JSON-RPC API can be accessed on port `3060` and WebSocket API can be ac

 > [!NOTE]
 >
-> Those are requirements for nodes that use snapshots recovery and history pruning (the default for docker-compose setup).
+> Those are requirements for nodes that use snapshots recovery and history pruning (the default for docker-compose
+> setup).
 >
-> For requirements for nodes running from DB dump see the [running](03_running.md) section. DB dumps are a way to start ZKsync node with full historical transactions history.
+> For requirements for nodes running from DB dump see the [running](03_running.md) section. DB dumps are a way to start
+> ZKsync node with full historical transactions history.
 >
-> For nodes with pruning disabled, expect the storage requirements on mainnet to grow at 1TB per month. If you want to stop historical DB
-> pruning you can read more about this in the [pruning](08_pruning.md) section.
+> For nodes with pruning disabled, expect the storage requirements on mainnet to grow at 1TB per month. If you want to
+> stop historical DB pruning you can read more about this in the [pruning](08_pruning.md) section.

 - 32 GB of RAM and a relatively modern CPU
 - 50 GB of storage for testnet nodes
```
6 changes: 3 additions & 3 deletions docs/guides/external-node/01_intro.md
```diff
@@ -10,9 +10,9 @@ This documentation explains the basics of the ZKsync Node.
 ## What is the ZKsync node

 The ZKsync node is a read-replica of the main (centralized) node that can be run by external parties. It functions by
-receiving blocks from the ZKsync network and re-applying transactions locally, starting from the genesis block. The ZKsync node
-shares most of its codebase with the main node. Consequently, when it re-applies transactions, it does so exactly as the
-main node did in the past.
+receiving blocks from the ZKsync network and re-applying transactions locally, starting from the genesis block. The
+ZKsync node shares most of its codebase with the main node. Consequently, when it re-applies transactions, it does so
+exactly as the main node did in the past.

 **It has two modes of initialization:**
```
1 change: 0 additions & 1 deletion docs/guides/external-node/04_observability.md
```diff
@@ -38,6 +38,5 @@ memory leaking.
 | `api_web3_call`          | Histogram | `method` | Duration of Web3 API calls                             |
 | `sql_connection_acquire` | Histogram | -        | Time to get an SQL connection from the connection pool |

-
 Metrics can be used to detect anomalies in configuration, which is described in more detail in the
 [next section](05_troubleshooting.md).
```
10 changes: 5 additions & 5 deletions docs/guides/external-node/07_snapshots_recovery.md
```diff
@@ -2,8 +2,8 @@

 Instead of initializing a node using a Postgres dump, it's possible to configure a node to recover from a protocol-level
 snapshot. This process is much faster and requires much less storage. Postgres database of a mainnet node recovered from
-a snapshot is less than 500GB. Note that without [pruning](08_pruning.md) enabled, the node state will continuously
-grow at a rate about 15GB per day.
+a snapshot is less than 500GB. Note that without [pruning](08_pruning.md) enabled, the node state will continuously grow
+at a rate about 15GB per day.

 ## How it works

@@ -94,6 +94,6 @@ An example of snapshot recovery logs during the first node start:

 Recovery logic also exports some metrics, the main of which are as follows:

-| Metric name                                             | Type      | Labels       | Description                                                            |
-| ------------------------------------------------------- | --------- | ------------ | ---------------------------------------------------------------------- |
-| `snapshots_applier_storage_logs_chunks_left_to_process` | Gauge     | -            | Number of storage log chunks left to process during Postgres recovery  |
+| Metric name                                             | Type  | Labels | Description                                                            |
+| ------------------------------------------------------- | ----- | ------ | ---------------------------------------------------------------------- |
+| `snapshots_applier_storage_logs_chunks_left_to_process` | Gauge | -      | Number of storage log chunks left to process during Postgres recovery  |
```
40 changes: 29 additions & 11 deletions zk_toolbox/crates/zk_inception/src/commands/update.rs
```diff
@@ -2,26 +2,31 @@ use std::path::Path;

 use anyhow::{Context, Ok};
 use common::{
+    db::migrate_db,
     git, logger,
     spinner::Spinner,
     yaml::{merge_yaml, ConfigDiff},
 };
 use config::{
-    ChainConfig, EcosystemConfig, CONTRACTS_FILE, EN_CONFIG_FILE, ERA_OBSERBAVILITY_DIR,
-    GENERAL_FILE, GENESIS_FILE, SECRETS_FILE,
+    traits::ReadConfigWithBasePath, ChainConfig, EcosystemConfig, CONTRACTS_FILE, EN_CONFIG_FILE,
+    ERA_OBSERBAVILITY_DIR, GENERAL_FILE, GENESIS_FILE, SECRETS_FILE,
 };
 use xshell::Shell;
+use zksync_config::configs::Secrets;

 use super::args::UpdateArgs;
-use crate::messages::{
-    msg_diff_contracts_config, msg_diff_genesis_config, msg_diff_secrets, msg_updating_chain,
-    MSG_CHAIN_NOT_FOUND_ERR, MSG_DIFF_EN_CONFIG, MSG_DIFF_EN_GENERAL_CONFIG,
-    MSG_DIFF_GENERAL_CONFIG, MSG_PULLING_ZKSYNC_CODE_SPINNER,
-    MSG_UPDATING_ERA_OBSERVABILITY_SPINNER, MSG_UPDATING_SUBMODULES_SPINNER, MSG_UPDATING_ZKSYNC,
-    MSG_ZKSYNC_UPDATED,
+use crate::{
+    consts::{PROVER_MIGRATIONS, SERVER_MIGRATIONS},
+    messages::{
+        msg_diff_contracts_config, msg_diff_genesis_config, msg_diff_secrets, msg_updating_chain,
+        MSG_CHAIN_NOT_FOUND_ERR, MSG_DIFF_EN_CONFIG, MSG_DIFF_EN_GENERAL_CONFIG,
+        MSG_DIFF_GENERAL_CONFIG, MSG_PULLING_ZKSYNC_CODE_SPINNER,
+        MSG_UPDATING_ERA_OBSERVABILITY_SPINNER, MSG_UPDATING_SUBMODULES_SPINNER,
+        MSG_UPDATING_ZKSYNC, MSG_ZKSYNC_UPDATED,
+    },
 };

-pub fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
+pub async fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
     logger::info(MSG_UPDATING_ZKSYNC);
     let ecosystem = EcosystemConfig::from_file(shell)?;

@@ -48,7 +53,8 @@ pub fn run(shell: &Shell, args: UpdateArgs) -> anyhow::Result<()> {
             &genesis_config_path,
             &contracts_config_path,
             &secrets_path,
-        )?;
+        )
+        .await?;
     }

     let path_to_era_observability = shell.current_dir().join(ERA_OBSERBAVILITY_DIR);

@@ -114,7 +120,7 @@ fn update_config(
     Ok(())
 }

-fn update_chain(
+async fn update_chain(
     shell: &Shell,
     chain: &ChainConfig,
     general: &Path,

@@ -177,5 +183,17 @@ fn update_chain(
         )?;
     }

+    let secrets = Secrets::read_with_base_path(shell, secrets)?;
+
+    if let Some(db) = secrets.database {
+        if let Some(url) = db.server_url {
+            let path_to_migration = chain.link_to_code.join(SERVER_MIGRATIONS);
+            migrate_db(shell, path_to_migration, url.expose_url()).await?;
+        }
+        if let Some(url) = db.prover_url {
+            let path_to_migration = chain.link_to_code.join(PROVER_MIGRATIONS);
+            migrate_db(shell, path_to_migration, url.expose_url()).await?;
+        }
+    }
     Ok(())
 }
```
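The new block at the end of `update_chain` only migrates databases whose connection URLs are actually present in the chain's secrets, so a chain without a prover database is skipped rather than erroring. A stdlib-only sketch of that selection step (the `DatabaseSecrets` struct and the two migration directories below are simplified, hypothetical stand-ins for the real `zksync_config` types and `consts` values):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical, simplified stand-in for the database section of
/// `zksync_config::configs::Secrets`.
struct DatabaseSecrets {
    server_url: Option<String>,
    prover_url: Option<String>,
}

/// Pair each database URL that is present with the migration directory
/// for that component, mirroring the `if let Some(url)` chain in the diff.
fn migration_targets(link_to_code: &Path, db: &DatabaseSecrets) -> Vec<(PathBuf, String)> {
    let mut targets = Vec::new();
    if let Some(url) = &db.server_url {
        // Assumed location of server migrations; the real path lives in `consts`.
        targets.push((link_to_code.join("core/lib/dal/migrations"), url.clone()));
    }
    if let Some(url) = &db.prover_url {
        // Assumed location of prover migrations; the real path lives in `consts`.
        targets.push((link_to_code.join("prover/migrations"), url.clone()));
    }
    targets
}

fn main() {
    // A chain with no prover database configured: only the server DB is migrated.
    let db = DatabaseSecrets {
        server_url: Some("postgres://localhost/zksync_server".into()),
        prover_url: None,
    };
    let targets = migration_targets(Path::new("/code/zksync-era"), &db);
    assert_eq!(targets.len(), 1);
    println!("would migrate: {}", targets[0].0.display());
}
```

In the actual diff, each such pair is fed to `migrate_db(shell, path_to_migration, url.expose_url()).await?`, which applies any pending migrations against that URL.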
2 changes: 1 addition & 1 deletion zk_toolbox/crates/zk_inception/src/main.rs
```diff
@@ -135,7 +135,7 @@ async fn run_subcommand(inception_args: Inception, shell: &Shell) -> anyhow::Res
         InceptionSubcommands::Explorer(args) => commands::explorer::run(shell, args).await?,
         InceptionSubcommands::Consensus(cmd) => cmd.run(shell).await?,
         InceptionSubcommands::Portal => commands::portal::run(shell).await?,
-        InceptionSubcommands::Update(args) => commands::update::run(shell, args)?,
+        InceptionSubcommands::Update(args) => commands::update::run(shell, args).await?,
         InceptionSubcommands::Markdown => {
            clap_markdown::print_help_markdown::<Inception>();
         }
```
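Because `migrate_db` is async, the change ripples upward through the call chain: `update_chain`, `update::run`, and the `run_subcommand` dispatch arm all gain `async`/`.await`. A toy, self-contained illustration of that propagation (none of these functions are the real zk_toolbox ones, and the hand-rolled `block_on` is only suitable for futures that never suspend):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal "executor" for futures that never actually suspend -- just enough
// to drive the sketch below without pulling in tokio. Not production code.
fn block_on<F: Future>(fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// The leaf operation is async...
async fn migrate_db() -> Result<(), String> {
    Ok(())
}

// ...so every caller up the chain must become async and `.await` it,
// mirroring how `update_chain`, `update::run`, and `run_subcommand` changed.
async fn update_chain() -> Result<(), String> {
    migrate_db().await // was `migrate_db()?` when the leaf was synchronous
}

async fn run() -> Result<(), String> {
    update_chain().await
}

fn main() {
    assert!(block_on(run()).is_ok());
}
```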
