
Thousand Validators Program Backend

πŸ‘‹ Introduction

The Thousand Validators Programme is an initiative by Web3 Foundation and Parity Technologies to use the funds held by both organizations to nominate validators in the community.

How it Works

The nominating backend routinely changes its nominations every era. It does this by short-listing candidates by validity and then sorting them by their weighted score in descending order. Validators with a higher weighted score are selected for any available slots. As validators are nominated and actively validate, their weighted scores decrease, allowing other validators to be selected in subsequent rounds of assessment. If a validator is active during a single nomination period (the time after a new nomination and before the next one) and does not break any of the requirements, its rank is increased by 1. Validators with a higher rank have performed well within the program for a longer period of time. The backend nominates as many validators as it reasonably can, so that each nominee has an opportunity to be elected into the active set.
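The selection loop described above can be sketched roughly as follows. The `Candidate` shape, field names, and helper functions here are hypothetical placeholders for illustration, not the actual types used in `packages/core`:

```typescript
// Hypothetical candidate shape; the real types live in packages/core.
interface Candidate {
  name: string;
  valid: boolean; // passes all program requirements
  weightedScore: number; // decreases while actively validating
  rank: number; // incremented after a clean nomination period
}

// Short-list valid candidates, then sort by weighted score, descending,
// and take as many as there are available nomination slots.
function selectNominees(candidates: Candidate[], slots: number): Candidate[] {
  return candidates
    .filter((c) => c.valid)
    .sort((a, b) => b.weightedScore - a.weightedScore)
    .slice(0, slots);
}

// After a nomination period in which the validator was active and broke
// no requirements, its rank rises by 1.
function endOfPeriod(
  c: Candidate,
  wasActive: boolean,
  brokeRequirements: boolean,
): Candidate {
  return wasActive && !brokeRequirements ? { ...c, rank: c.rank + 1 } : c;
}
```

Because scores drop while a validator is active, the sort naturally rotates nominations to candidates that have not yet had a turn.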

How to Apply

This Repo

A monorepo of TypeScript microservices for the Thousand Validators Program.

The monorepo is managed using Yarn workspaces and contains the following packages:

  • packages/common: A package containing common code shared across all microservices.
  • packages/core: A package containing the core logic of the Thousand Validators Program.
  • packages/gateway: A package for an API gateway that exposes the backend with a REST API.
  • packages/telemetry: A package for a telemetry client that monitors uptime.

Installation & Setup

Instances

There are a few ways of running the backend with Docker containers: either in Kubernetes, or with docker-compose.

Current architecture:

The following are the different ways of running in either the Current or Microservice architecture, with either Kusama or Polkadot, and in either Development or Production:

  • Kusama Current
    • Running as a monolith with production values
  • Polkadot Current
    • Running as a monolith with production values

Each package contains a Dockerfile, which is used for running in production, and a Dockerfile-dev, which is used for development. The development images run with nodemon, so each time a file is saved or changed the image rebuilds and the container restarts. Any changes to the regular Dockerfile require a manual rebuild of the Docker image.

The difference between running as Current or Microservice is which Docker containers get run with docker-compose (Microservices have services separated out into their own containers, and additionally rely on Redis for message queues). Beyond this, everything else (such as whether it runs as a Kusama or Polkadot instance) is determined by the JSON configuration files that get generated.
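As a rough illustration of that difference (service and image names below are hypothetical, not the repository's actual docker-compose files), a monolith compose file defines a single core container, while a microservice compose file splits each service out and adds Redis:

```yaml
# Hypothetical sketch only; see the repository's docker-compose files
# for the real service definitions.

# Monolith: one core container, with gateway and telemetry in-process.
services:
  1kv-core:
    build: .
    ports:
      - "3300:3300"

# Microservices: services split into their own containers, plus Redis
# for the message queues, e.g.:
#   1kv-core: ...
#   1kv-gateway: ...
#   1kv-telemetry: ...
#   1kv-worker: ...
#   redis: ...
```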

Cloning the Repository

git clone https://github.com/w3f/1k-validators-be.git
cd 1k-validators-be

Installing System Dependencies

Ensure the following are installed on your machine:

  • Node.js
  • Yarn
  • Docker and docker-compose

Yarn Installation & Docker Scripts (All in One)

The following yarn scripts run all the required installations and config generation, then build and run the Docker containers. If these are not used, you will need to do each step separately as described in the following sections. First run:

yarn install

Kusama Current / Monolith Production:

yarn docker:kusama-current:start

Kusama Current / Monolith Dev:

yarn docker:kusama-current-dev:start

Polkadot Current / Monolith Production:

yarn docker:polkadot-current:start

Polkadot Current / Monolith Dev:

yarn docker:polkadot-current-dev:start

Install Yarn Dependencies

yarn install

Building Node Packages

yarn build

Creating Configuration Files

Before running the microservices with docker-compose, you must create configuration files for each service to be run.

Kusama Current Config: This will create a configuration file for a Kusama instance that mirrors what is currently deployed. This runs the core service within one container, which includes the gateway and telemetry client, all run in the same Node.js process.

yarn create-config-kusama-current

Polkadot Current Config: This will create a configuration file for a Polkadot instance that mirrors what is currently deployed. This runs the core service within one container, which includes the gateway and telemetry client, all run in the same Node.js process.

yarn create-config-polkadot-current

Viewing Logs

To view the aggregated logs of all the containers:

yarn docker:logs

or

docker compose logs -f       

To view the logs of an individual service:

Core:

yarn docker:logs:core

or

docker logs 1k-validators-be-1kv-core-1 -f

Gateway:

yarn docker:logs:gateway

or

docker logs 1k-validators-be-1kv-gateway-1 -f   

Telemetry:

yarn docker:logs:telemetry

or

docker logs 1k-validators-be-1kv-telemetry-1 -f  

Worker:

yarn docker:logs:worker

or

docker logs 1k-validators-be-1kv-worker-1 -f  

Stopping Containers

To stop all containers:

yarn docker:stop

or

docker-compose down

Express REST API

When running as a monolith, the Express REST API is exposed on port 3300. When running as microservices, the Express REST API is exposed on port 3301.

You can then query an endpoint like /candidates by going to http://localhost:3300/candidates or http://localhost:3301/candidates in your browser.
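The same query can be made from a small script. Only the ports (3300 / 3301) and the /candidates path come from this README; the helper below is purely illustrative:

```typescript
// Build an endpoint URL for a locally running backend.
// 3300 is the monolith port, 3301 the microservice port.
function endpointUrl(port: 3300 | 3301, path: string): string {
  return `http://localhost:${port}/${path.replace(/^\//, "")}`;
}

// Fetch the candidate list (Node 18+ provides a global fetch).
// This requires the backend containers to be up.
async function fetchCandidates(port: 3300 | 3301 = 3300): Promise<unknown> {
  const res = await fetch(endpointUrl(port, "/candidates"));
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}
```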

Mongo Express (Database GUI)

To view the Mongo Express GUI to interact with the MongoDB Database, go to http://localhost:8888/ in your browser. Or run yarn open:mongo-express from the root directory.

πŸ“ Contribute

πŸ’‘ Help