NEAR Indexer for Explorer is built on top of NEAR Lake Framework to watch the network and store all the events in a PostgreSQL database.
NEAR runs the indexer and maintains it for NEAR Explorer, NEAR Wallet, and some other internal services. It has proved to be a great source of data for various analyses and services, so we decided to provide shared read-only public access to the data:
- testnet credentials: `postgres://public_readonly:nearprotocol@testnet.db.explorer.indexer.near.dev/testnet_explorer`
- mainnet credentials: `postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer`
WARNING: We may evolve the data schemas, so make sure you follow the release notes of this repository.
NOTE: Please keep in mind that access to the database is shared with everyone in the world, so make sure you limit the number of queries you run and that each individual query is efficient.
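For instance, constrain queries to indexed columns and cap the result set. A minimal sketch, assuming the `blocks` table from the Explorer schema:

```sql
-- Fetch the ten most recent indexed blocks; ordering by an indexed
-- column (block_height) and applying LIMIT keeps the query cheap.
SELECT block_height, block_hash, block_timestamp
FROM blocks
ORDER BY block_height DESC
LIMIT 10;
```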
The final setup consists of the following components:
- PostgreSQL database (you can run it locally or in the cloud), which can hold the whole history of the blockchain (as of August 2022, mainnet takes 3TB of data in PostgreSQL storage, and testnet takes 1TB)
- NEAR Indexer for Explorer binary that operates as a NEAR Lake Framework based indexer; it requires AWS S3 credentials
Before you proceed, make sure you have the following software installed:

- Rust compiler of the version mentioned in the `rust-toolchain` file in the root of the nearcore project
- `libpq-dev` dependency

  On Debian/Ubuntu:

  ```
  $ sudo apt install libpq-dev
  ```
Set up PostgreSQL, create a database with the regular tools, and note the connection string (database host, credentials, and the database name).
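For example, from `psql` (a minimal sketch; the database name `near_explorer` is just a placeholder):

```sql
-- Create an empty database for the indexer to write into.
CREATE DATABASE near_explorer;
-- The connection string then looks like:
--   postgres://user:password@localhost/near_explorer
```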
Clone this repository and open the project folder:

```
$ git clone https://github.com/near/near-indexer-for-explorer.git
$ cd near-indexer-for-explorer
```
You need to provide credentials via the `.env` file for:

- database (replace `user`, `password`, `host` and `db_name` with yours):

  ```
  $ echo "DATABASE_URL=postgres://user:password@host/db_name" > .env
  ```

- AWS S3 (permission to read from buckets):

  ```
  $ echo "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE" >> .env
  $ echo "AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" >> .env
  ```
Then you need to apply migrations to create the necessary database structure. For this you'll need `diesel-cli`, which you can install like so:

```
$ cargo install diesel_cli --no-default-features --features "postgres"
```

Then apply the migrations:

```
$ cd database && diesel migration run
```
If you have a DB with some data already collected and you need to apply the next migration, we highly recommend reading the migration contents first. Some migrations include explanations of what should be done, e.g. [1], [2], [3]. As general advice, add the `CONCURRENTLY` option to all index creation statements and apply such changes manually.
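For example, a sketch of applying an index migration by hand (the index and table names here are hypothetical, not taken from an actual migration):

```sql
-- CONCURRENTLY builds the index without locking the table for writes,
-- so the indexer can keep ingesting data in the meantime.
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY transactions_block_timestamp_idx
    ON transactions (block_timestamp);
```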
Compile NEAR Indexer for Explorer:

```
$ cargo build --release
```

The command to run NEAR Indexer for Explorer has to include the chain-id and a start option. You can choose from the following start options:
- `from-latest` - start indexing blocks from the latest finalized block
- `from-interruption` - start indexing blocks from the block where NEAR Indexer was interrupted last time, or `<number_of_blocks>` earlier if provided
- `from-genesis` - download and store accounts/access keys from the genesis file and start indexing from the genesis block
- `from-block --height <block_height>` - start indexing blocks from the specific block height
When starting Indexer for Explorer with `from-genesis`, the entire genesis file will be loaded into memory before iterating over the stored accounts/access keys. As of writing this, `mainnet` and `betanet` both have relatively small genesis files (<1GB), but the `testnet` file size is around 5GB. Therefore, if you intend to store the `testnet` genesis records, make sure that your system has sufficient RAM to handle the memory load.
NEAR Indexer for Explorer works in strict mode by default. In strict mode, the Indexer will ensure parent data exists before storing children, retrying indefinitely until this condition is met. This is necessary as a parent (i.e. `block`) may still be processing while a child (i.e. `receipt`) is ready to be stored. This scenario will likely occur if you have not stored the genesis file or do not have all data prior to the block you start indexing from. In this case, you can disable strict mode to store data prior to the block you are concerned about, and then re-enable it once you have passed this block.

To disable strict mode, provide the following command argument:

```
--non-strict-mode
```
By default, NEAR Indexer for Explorer processes only a single block at a time. You can adjust this with the `--concurrency` argument (when the blocks are mostly empty, it is fine to go with a concurrency of as many as 100 blocks).
So the final command to run NEAR Indexer for Explorer can look like:

```
$ ./target/release/indexer-explorer \
  --non-strict-mode \
  --concurrency 1 \
  mainnet \
  from-latest
```
After the network is synced, you should see logs of every block height currently received by NEAR Indexer for Explorer.
Refer to the separate TROUBLESHOOTING.md document.
We highly recommend using a separate read-only user to access the data to avoid unexpected corruption of the indexed data.
We use the `public` schema for all tables. By default, new users have the ability to create new tables/views/etc. there. If you want to restrict that, you have to revoke these rights:

```sql
REVOKE CREATE ON SCHEMA PUBLIC FROM PUBLIC;
REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA PUBLIC FROM PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA PUBLIC GRANT SELECT ON TABLES TO PUBLIC;
```
After that, you can create a read-only user in PostgreSQL:

```sql
CREATE ROLE readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
-- Put your limit here, or just skip this command
ALTER ROLE readonly SET statement_timeout = '30s';

CREATE USER explorer WITH LOGIN PASSWORD 'password';
GRANT readonly TO explorer;
```

You can then connect as that user:

```
$ PGPASSWORD="password" psql -h 127.0.0.1 -U explorer databasename
```
Both the `indexer-explorer` and `circulating-supply` binaries are run within Docker; their `Dockerfile`s can be found in their respective directories/workspaces. Docker images are built using Google Cloud Build and then deployed to Google Cloud Run. The following commands can be used to build the Docker images:

```
$ docker build -f ./indexer/Dockerfile .
$ docker build -f ./circulating-supply/Dockerfile .
```
The tables `account_changes` and/or `assets__fungible_token_events` can still be enabled via feature flags at compile time:

```
$ cargo build --release --features "account_changes fungible_token_events"
```

Note that we no longer support these tables. We highly recommend using Enhanced API instead.