This project contains playbooks and roles for deploying a Hyperledger Fabric project across multiple instances. Everything runs inside Docker containers managed by docker-compose. Both kafka and solo orderers are supported.
Your machine should have:
- GNU/Linux operating system
- ansible 2.5.0+
You can find installation instructions on the Ansible website.
Nodes provisioned by Ansible should have:
- Ubuntu 16.04 or CentOS 7
- python
- sudo access
Ansible will connect to each specified node using ssh and execute all necessary tasks to deploy the network from scratch.
Credentials, host ip-addresses, the kafka cluster (if needed), and HL Fabric channels should be defined in hosts.yml
for each host.
HL-specific parameters like service ports, chaincode name and version, and the project home should be defined in group_vars/all.yml
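For orientation, here is a hedged sketch of the kind of values group_vars/all.yml holds. Only fabric_starter_workdir is mentioned elsewhere in this README; the other variable names are illustrative placeholders, so check the file itself for the exact names the roles use:

# group_vars/all.yml -- illustrative excerpt, not the file's actual contents
fabric_starter_workdir: /home/user/fabric-starter   # project home on target hosts
chaincode_name: mychaincode                         # hypothetical; would match templates/chaincode/go/mychaincode/mychaincode.go
chaincode_version: "1.0"                            # hypothetical chaincode version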
All templates are located in the templates folder:
- artifact-templates - contains Hyperledger Fabric configuration files.
- docker-compose-templates - contains docker-compose templates.
- chaincode - contains chaincode source code; it should be placed in go/$your_chaincode_name/$your_chaincode_name.go
- www-client - should contain all files that Docker will map to the web server's document_root directory.
Please note that by default this folder is only transferred to the target hosts, so you need to edit docker-compose-templates/base.yaml
to map the directory into the Docker container (a hedged sketch of such a mapping follows below).
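As a hedged sketch (the service name and container path below are assumptions, not taken from the project's templates), the mapping added to docker-compose-templates/base.yaml could look like this:

# Hypothetical fragment of docker-compose-templates/base.yaml
services:
  api:                                       # illustrative service name for the api container
    volumes:
      - ./www-client:/usr/share/nginx/html   # assumed document_root path inside the container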
- common-packages - installs jq, python-pip, rsync and docker-python (python libs for docker) on target hosts.
- docker - installs docker and docker-compose on target hosts (via the Docker Ansible Galaxy role).
- fabric - pulls all necessary Hyperledger-fabric Docker images.
Calls the roles mentioned earlier to install all necessary software and prepare the target hosts for deployment.
Generates HL Fabric and docker-compose configuration files from templates and transfers them to the target hosts (the equivalent of ./network.sh -m generate
in fabric-starter). It performs the following steps:
- Deletes any existing HL Fabric configuration and Docker containers on all described nodes, including localhost.
- Templates all configs specified in group_vars/all.yml:
  Docker config files:
  1. base.yaml (base docker-compose config file)
  2. docker-compose-{{ org }}.yaml (final docker-compose config file)
  HL Fabric config files:
  1. configtx.yaml
  2. fabric-ca-server-{{ org }}.yml
  3. cryptogen-{{ org }}.yaml
- Transfers chaincode source code from templates/chaincode.
- Generates crypto material with the HL cryptogen tool, using the cliNoCryptoVolume container and the cryptogen-{{ org }}.yaml configuration file.
- Links the generated CA private keys in configuration files.
- Creates the {{ org }}Config.json file with configtxgen tools, using the cliNoCryptoVolume container.
- Synchronizes all generated configuration files between all nodes using the rsync Ansible module.
- Templates orderer config files:
  Docker:
  1. docker-compose-{{ domain }}.yaml
  HL Fabric:
  1. cryptogen-{{ domain }}.yaml
- Generates crypto material with the HL cryptogen tool, using the cli container and the cryptogen-{{ domain }}.yaml configuration file (see the command sketch after this list).
- Generates the orderer genesis block with the configtxgen tool, using the cli container.
- Generates config transaction (.tx) files for common and all channels specified in hosts.yml, using the configtxgen tool in the cli container.
- Synchronizes all configuration files generated by the orderer to the other nodes.
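For orientation, here is a minimal sketch of the kind of commands these steps run inside the cli container. These are not the playbook's actual tasks; the configtxgen profile name and output paths are assumptions:

# Illustrative Ansible tasks, not taken from the role
- name: Generate orderer crypto material with cryptogen
  command: docker exec cli cryptogen generate --config=./cryptogen-{{ domain }}.yaml

- name: Generate the orderer genesis block with configtxgen
  command: docker exec cli configtxgen -profile OrdererGenesis -outputBlock ./channel/genesis.block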
Creates all channels and starts the blockchain network (the equivalent of ./network.sh -m up
in fabric-starter). It performs the following steps:
- Synchronizes www-client content to all nodes; Docker maps it into the api container to serve the web-client application.
- Synchronizes docker-compose-zookeeper.yaml and docker-compose-kafka-broker.yaml (if specified).
- Starts the kafka cluster according to the configuration, with docker-compose-zookeeper.yaml and docker-compose-kafka-broker.yaml (if specified).
- Starts orderer Docker containers with docker-compose and the docker-compose-{{ domain }}.yaml config file:
  - orderer{{ orderer_id | default() }}.{{ domain }} (fabric orderer service)
  - cli.{{ domain }} (fabric command line interface)
- Starts peer Docker containers with docker-compose and the docker-compose-{{ org }}.yaml config file:
  - ca.{{ org }}.{{ domain }} (fabric ca)
  - peer0.{{ org }}.{{ domain }} and peer1.{{ org }}.{{ domain }} (fabric peers)
  - api.{{ org }}.{{ domain }} (web-client interface)
  - cli.{{ domain }} (fabric command line interface)
  - cli.{{ org }}.{{ domain }} (fabric command line interface)
  - cliNoCryptoVolume.{{ org }}.{{ domain }} (fabric command line interface)
- Creates shell scripts for manual network start and stop.
- Installs chaincode on common and all other specified channels, using the cli container.
- Creates common and all other specified channels, using the cli container of the root_peer (a sketch of these commands follows the list).
- Synchronizes all configuration files generated by the orderer to the other nodes.
- Joins the root_peer to common and all other specified channels, using the cli container.
- Instantiates chaincode on common and all other specified channels, using the cli container of the root_peer.
- Joins all other nodes to common and all other specified channels, using their cli containers.
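As a rough illustration of the channel steps above (again, not the playbook's actual tasks; the orderer address, port, and file paths are assumptions), the cli container runs peer CLI commands along these lines:

# Illustrative Ansible tasks, not taken from the role
- name: Create the common channel from the root_peer node
  command: docker exec cli peer channel create -o orderer0.{{ domain }}:7050 -c common -f ./channel/common.tx

- name: Join the local peer to the common channel
  command: docker exec cli peer channel join -b common.block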
node_roles
is a special configuration variable of array type that makes it easy to see which type of containers each host will serve.
Depending on its "role", each node receives its specific Docker containers and configuration.
All possible node roles:
- orderer - node will run orderer service
- root_orderer - this orderer will be used to generate configs for other orderers (in case of kafka multi-orderer setup, one role per network)
- kafka_broker - node will run a kafka-broker for kafka-cluster (optional)
- zookeeper - node will run a zookeeper instance for the kafka-cluster (optional)
- root_peer - node will be used to create all channels and instantiate chaincode for the whole network (one role per network)
- peer - node will host peers and api container for specified organization
git clone https://github.com/Altoros/Ansible-Fabric-Starter.git
You can find three example configurations:
- hosts_compact.yml - solo orderer, 3 organizations, the orderer service is hosted on the first organization's node. Only the common channel is enabled.
- hosts_dedicated_orderer.yml - solo orderer, 3 organizations, the orderer service is hosted on a separate node. 3 private channels between all organizations.
- hosts_kafka.yml - kafka orderer, 3 organizations, each organization has its own copy of the orderer service. 3 private channels between all organizations.
If the blockchain network architecture is pre-configured (or you want to run the default settings),
you just need to specify the ip-address of each host in ansible_host
and a user with sudo access in ansible_user.
The domain
and the machine domain-names (e.g. one.example.com
) are mainly required for the Docker network, so you can set whatever values you need.
Let's describe the most complicated example, the hosts_kafka.yml
configuration:
all:
  hosts:
    localhost: # localhost connection parameters, used for storing configuration while transferring it between nodes
      ansible_connection: local
  vars:
    domain: example.com
    additional_channels: # Optional; the common channel is created by default. Comment this out if you don't need additional channels.
      - name: a-b # channel name
        particapants: # Organizations to be included in the channel
          - a
          - b
      - name: a-c
        particapants:
          - a
          - c
    kafka_orderer: true # Enable kafka orderer; this example runs 4 brokers and 3 zookeepers.
    orderer_count: 3 # Number of orderers in the network; assumed to equal the number of organizations, so each org runs its own orderer copy
    kafka_replicas: 2 # Set the kafka_replicas parameter
    kafka_replication_factor: 3 # Set the kafka_replication_factor parameter (https://hyperledger-fabric.readthedocs.io/en/release-1.2/kafka.html)
  children:
    nodes:
      hosts:
        kafka.example.com: # node_roles below describes which containers will run on this node
          node_roles:
            - zookeeper # Apache zookeeper instance
            - kafka_broker # Apache kafka instance
          org: kafka # Organization name
          zookeeper_id: 0 # ID for zookeeper
          kafka_broker_id: 0 # ID for kafka-broker
          ansible_host: 172.16.16.1 # Real ip address or domain name of the machine
          ansible_user: username # User with sudo access
          ansible_private_key_file: ~/path-to-private-key # Private key used for ssh authentication
          ansible_ssh_port: 22 # Specify the ssh port here if it is not the default
        # Same structure for any other nodes
        a.example.com:
          node_roles:
            - root_orderer # This node will be used to generate crypto-config for other orderers
            - orderer # This node will host an orderer service
            - peer # This node will host peer and api containers for the organization
            - root_peer # This node will be used to create channels and instantiate chaincode
            - zookeeper # Hosts a zookeeper container for the kafka cluster
            - kafka_broker # Hosts a broker container for the kafka cluster
          org: a
          orderer_id: 0 # ID of the orderer service running on this host
          zookeeper_id: 1
          kafka_broker_id: 1
          ansible_host: 172.16.16.2
          ansible_user: username
          ansible_private_key_file: ~/path-to-private-key
          ansible_ssh_port: 22
        b.example.com:
          node_roles:
            - orderer
            - peer
            - zookeeper
            - kafka_broker
          org: b
          orderer_id: 1
          zookeeper_id: 2
          kafka_broker_id: 2
          ansible_host: 172.16.16.3
          ansible_user: username
          ansible_private_key_file: ~/path-to-private-key
          ansible_ssh_port: 22
        c.example.com: # This node hosts a peer, an orderer, and a kafka-broker, but no zookeeper.
          node_roles:
            - peer
            - orderer
            - kafka_broker
          org: c
          orderer_id: 2
          kafka_broker_id: 3
          ansible_host: 172.16.16.4
          ansible_user: username
          ansible_private_key_file: ~/path-to-private-key
          ansible_ssh_port: 22
Feel free to supply each host with any Ansible-related connection details you need, like ansible_private_key_file.
You can read more about Ansible inventory in the Ansible documentation.
If you need specific ports or chaincode parameters for particular nodes, you can add variables from group_vars/all.yml
to each host directly, as shown in the sketch below.
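For example, a hedged sketch of a per-host override (chaincode_version here is an illustrative name; use the actual variable names from group_vars/all.yml):

b.example.com:
  node_roles:
    - peer
  org: b
  ansible_host: 172.16.16.3
  ansible_user: username
  chaincode_version: "2.0"  # hypothetical per-host override of a group_vars/all.yml value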
Please note that every new deployment is configured to delete existing Docker volumes and containers. That means if you redeploy on a working system, all data will be lost.
First, ensure that you are in the project root directory:
cd ansible-fabric-starter
By default, the Ansible inventory is located in the hosts.yml file. You can rename any of the example configurations to hosts.yml, or specify the correct inventory via the -i
parameter.
If the deployment is being performed for the first time, you may want to install all dependencies, such as Docker, first:
ansible-playbook install-dependencies.yml -i hosts_kafka.yml
Or, if you'd like to keep your inventory configuration in hosts.yml:
ansible-playbook install-dependencies.yml
After all nodes are provisioned with the necessary software, we can configure and start the blockchain network:
ansible-playbook config-network.yml -i hosts_kafka.yml
Hint: config-network.yml will include start-network.yml automatically.
If you'd like to redeploy the network without reconfiguration, to drop the ledger for example, just launch start-network.yml (don't forget the inventory configuration).
ansible-playbook start-network.yml -i hosts_kafka.yml
After a successful deployment you can use the testing playbooks, which will invoke chaincode on peers via the cli containers.
- test.yml will invoke chaincode in the common channel.
- test_bilateral.yml will invoke chaincode in a bilateral channel.
All test data should be specified in the set_fact task of the test.yml
and test_bilateral.yml
playbooks:
- name: Set chaincode invoke content
  set_fact:
    chaincode_update_json: '{"Args":["move", "a", "b", "10"]}' # cc invoke args
    chaincode_query_json: '{"Args":["query", "a"]}'
    invoke_channel_name: "common" # channel where the chaincode will be called
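For instance, to run the same invoke against one of the bilateral channels from hosts_kafka.yml, you could point invoke_channel_name at it (a sketch, assuming the a-b channel is defined in your inventory):

- name: Set chaincode invoke content
  set_fact:
    chaincode_update_json: '{"Args":["move", "a", "b", "10"]}'
    chaincode_query_json: '{"Args":["query", "a"]}'
    invoke_channel_name: "a-b"  # bilateral channel from hosts_kafka.yml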
Test playbooks are launched like any other Ansible playbook:
ansible-playbook test.yml -i hosts_kafka.yml
Ansible-Fabric-Starter creates scripts for starting and stopping each node, because the Docker containers are configured to start the network at system start-up (restart_policy: always).
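In plain docker-compose terms this corresponds to a restart policy on each service, roughly as in this illustrative fragment (the service name and the exact key used in the generated files are assumptions):

services:
  peer0.a.example.com:
    restart: always  # container is restarted automatically, including after a reboot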
So, if you'd like to turn the network off and on, there are shell scripts:
- start-network.sh
- stop-network.sh
They are generated in fabric_starter_workdir, which is defined in group_vars/all.yml.
- Each playbook contains cleaning tasks for Docker containers. If there is nothing to clean, these tasks may fail; such failures can be ignored.
Feel free to ask me any questions at:
- E-mail:
hleb.ioda@altoros.com