Learning project to understand how to implement the bully algorithm and a distributed mutex with docker containers.
This project implements the bully algorithm as well as a distributed mutex with docker containers.
Several containers are started, each of which is accessible over a REST API.
For more information, take a look at the code comments and the swagger documentation (task swagger).
Details about the implementation of the bully algorithm and the distributed mutex are shown below.
- Go installation (getting started) - run the project binary
- Docker installation (getting started) - run docker containers
- Task installation (doc) - build tool, see Taskfile.yml
- Go Swagger installation (doc) - swagger api documentation
Execute the commands below within the project root directory.
task --list
task: Available tasks for this project:
* build: Build docker container
* run: Start docker container
* sdown: Stop docker-compose scenario
* sup: Start docker-compose scenario
* swagger: Generate swagger.yml and start local server
* update: Update project dependencies
// run a listed task
task <task>
// e.g.
task build
// stop all running goBully containers
docker stop $(docker ps -a -q --filter ancestor=leonardpahlke/gobully:latest --format="{{.ID}}")
- docker containers act as users in the network and run the bully algorithm
- bully algorithm scenario simulated with docker-compose
- detailed swagger documentation (swagger.yml) generated with go-swagger
── goBully
├── api
│   └── swagger.yml // swagger api documentation
├── assets
│ └── goBully.jpg // pictures and stuff
├── cmd
│ └── main.go // starting point of the application
├── internal
│ ├── election
│ │ ├── election.go // election private functions
│ │ └── election_client.go // election public functions
│ ├── identity
│ │ ├── register.go // user register workflow
│ │ └── user.go // user definition
│ ├── api
│ │ ├── doc.go // rest general documentation info
│ │ ├── rest_client.go // api setup
│ │ ├── rest_election.go // election rest endpoints
│ │ ├── rest_mutex.go // mutex rest endpoints
│ │ └── rest_user.go // user rest endpoints
│ └── mutex
│ ├── mutex.go // mutex private functions
│ └── mutex_client.go // mutex public functions
├── pkg
│ └── request.go // rest http calls
├── .gitignore
├── docker-compose.yml // docker-compose scenario
├── Dockerfile // docker container script
├── Taskfile.yml // build scripts
├── go.mod // go module information
├── go.sum // go module library imports
└── README.md
Scenario info (which user registers to which):
- user1 -> null
- user2 -> user1
- user3 -> user1
If a user connects to another one (registers to the network), the new user information gets sent to all network participants.
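A minimal Go sketch of that broadcast, with a simplified `User` type and a hypothetical `/user/update` endpoint (the real definitions live in `internal/identity/`):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// User is a simplified stand-in for the identity defined in internal/identity/user.go.
type User struct {
	UserID   string `json:"userId"`
	Endpoint string `json:"endpoint"` // e.g. "http://user2:8080"
}

// broadcastNewUser sends the newly registered user to every known participant,
// so that all nodes end up with the same user list.
func broadcastNewUser(newUser User, knownUsers []User) {
	payload, _ := json.Marshal(newUser)
	for _, u := range knownUsers {
		// "/user/update" is a hypothetical endpoint name used only in this sketch
		resp, err := http.Post(u.Endpoint+"/user/update", "application/json", bytes.NewReader(payload))
		if err != nil {
			fmt.Printf("could not reach %s: %v\n", u.UserID, err)
			continue
		}
		resp.Body.Close()
	}
}

func main() {
	users := []User{{UserID: "user1", Endpoint: "http://user1:8080"}}
	broadcastNewUser(User{UserID: "user3", Endpoint: "http://user3:8080"}, users)
}
```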
internal/election/election.go
- receiveMessage() // get a message from an api (election, coordinator)
- receiveMessageElection() // handle incoming election message
- sendMessageElection() // send an election message to another user
- receiveMessageCoordinator() // set local coordinator reference with incoming details
- sendMessagesCoordinator() // send coordinator messages to other users
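A rough sketch of how `receiveMessage()` could dispatch to the two handlers above; the struct and field names here are assumptions for illustration, not the actual DTOs from `internal/election`:

```go
package main

import "fmt"

// electionMessage is an assumed, simplified message payload.
type electionMessage struct {
	Payload string // "election" or "coordinator"
	UserID  string // sending user
}

// receiveMessage dispatches an incoming message to the matching handler,
// mirroring the function list above.
func receiveMessage(msg electionMessage) {
	switch msg.Payload {
	case "election":
		fmt.Println("handle election message from", msg.UserID)
		// -> receiveMessageElection(msg)
	case "coordinator":
		fmt.Println("set local coordinator reference to", msg.UserID)
		// -> receiveMessageCoordinator(msg)
	default:
		fmt.Println("unknown payload:", msg.Payload)
	}
}

func main() {
	receiveMessage(electionMessage{Payload: "election", UserID: "user2"})
}
```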
receiveMessageElection - election message received
1. filter users to send election messages to (UserID > YourID)
2. if |filtered users| <= 0
   - 2.1 YES: you have the highest ID and win - send coordinatorMessages - exit
   - 2.2 NO: transform message and create POST payload
   - 2.3 add user information to local callbackList
   - 2.4 GO - sendElectionMessage(callbackResponse, msgPayload)
     - 2.4.1 send POST request to client
     - 2.4.2 if response is OK, add client to the list of clients who have responded
   - 2.5 wait a few seconds (enough time for users to answer the request)
   - 2.6 sort users into those who have called back and those who have not
   - 2.7 if |answered users| <= 0
     - 2.7.1 YES: send coordinatorMessages
   - 2.8 remove all users who didn't answer from the userList
   - 2.9 clear callback list
3. send response back (answer)
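A condensed Go sketch of the core decision in steps 1 and 2: filter for users with a higher ID, otherwise declare yourself coordinator. All type and function names here are assumptions, not the actual project code:

```go
package main

import "fmt"

// node is a simplified network participant for this sketch.
type node struct {
	UserID string
}

// higherNodes returns all nodes whose ID is greater than ownID;
// only these receive an election message in the bully algorithm.
func higherNodes(ownID string, nodes []node) []node {
	var higher []node
	for _, n := range nodes {
		if n.UserID > ownID {
			higher = append(higher, n)
		}
	}
	return higher
}

// startElection mirrors the decision above: no higher node means you win
// and send coordinator messages, otherwise you challenge the higher nodes.
func startElection(ownID string, nodes []node) {
	candidates := higherNodes(ownID, nodes)
	if len(candidates) == 0 {
		fmt.Printf("%s has the highest ID and sends coordinator messages\n", ownID)
		return
	}
	for _, c := range candidates {
		// in the real project this is a POST request to the user's election endpoint
		fmt.Printf("%s sends election message to %s\n", ownID, c.UserID)
	}
	// afterwards the caller waits a few seconds for answers (callbackList)
	// and sends coordinator messages itself if nobody replied
}

func main() {
	nodes := []node{{"user1"}, {"user2"}, {"user3"}}
	startElection("user2", nodes)
}
```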
It goes like this with 3 clients (A,B,C):
- Client A wants to enter the critical section
- A sends a request with his clock to A, B, C
- B is currently in the critical section and stores the request
- C is idle and sends reply-ok
- A sends himself a reply-ok
- C wants to enter the critical section & sends a request to A, B, C
- A waits for the mutex and his request has a lower clock, therefore he stores C's request
- B is in the critical section, therefore stores the request
- B finishes his critical section
- B sends reply-ok to the stored requests of A and C
- A got all required reply-ok messages and may now enter the critical section
- C still waits
- A has finished his critical section and sends reply-ok to the stored request of C
- C got all required reply-ok messages and may now enter the critical section
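The reply-or-store decision in this walkthrough depends on the node's own state and Lamport clock. A minimal sketch of that comparison, with assumed state names and an assumed tie-break on the user ID (ties are not covered in the walkthrough above):

```go
package main

import "fmt"

type mutexState string

const (
	stateReleased mutexState = "released" // idle, not interested in the critical section
	stateWanting  mutexState = "wanting"  // waiting for reply-ok messages
	stateHeld     mutexState = "held"     // currently inside the critical section
)

// handleRequest decides, as in the 3-client walkthrough, whether to answer
// immediately with reply-ok or to store the request until the critical
// section is released.
func handleRequest(ownState mutexState, ownClock, requestClock int, requesterID, ownID string) string {
	switch {
	case ownState == stateHeld:
		return "store request" // like B: inside the critical section
	case ownState == stateWanting && (ownClock < requestClock ||
		(ownClock == requestClock && ownID < requesterID)):
		return "store request" // own request has priority (lower clock / lower ID)
	default:
		return "reply-ok" // idle like C, or the incoming request has priority
	}
}

func main() {
	fmt.Println(handleRequest(stateHeld, 3, 1, "A", "B"))     // store request
	fmt.Println(handleRequest(stateReleased, 0, 1, "A", "C")) // reply-ok
	fmt.Println(handleRequest(stateWanting, 1, 2, "C", "A"))  // store request
}
```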
internal/mutex/**
requestCriticalArea - tell all users that this user wants to enter the critical section
- set state to 'wanting'
- increment clock, you are about to send mutex-messages
- create a request mutex-message
- create a response channel for every user (including yourself)
- create new object to manage responses of this request (containing all user response channels)
- add new requestResponseChannel to replyOkwaitingList
- GO - send all users the request mutex-message
- wait for all users to reply-ok to your request
- remove the waiting responses object from the list
- enterCriticalSection()
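A minimal Go sketch of the channel bookkeeping described above: one response channel per user, and a wait until every reply-ok has arrived. The names and the in-process simulation of the POST requests are assumptions:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// requestCriticalSection sends a (simulated) mutex request to every user and
// blocks until all of them have answered with reply-ok, as described above.
func requestCriticalSection(users []string, clock int) {
	var wg sync.WaitGroup
	for _, u := range users {
		replyOk := make(chan string, 1) // one response channel per user
		wg.Add(1)
		go func(user string, ch chan string) {
			defer wg.Done()
			// stands in for the POST request made in sendRequestToUser
			go func() { time.Sleep(50 * time.Millisecond); ch <- "reply-ok" }()
			fmt.Printf("got %s from %s\n", <-ch, user)
		}(u, replyOk)
	}
	wg.Wait() // all users replied with reply-ok
	fmt.Println("entering critical section with clock", clock)
}

func main() {
	requestCriticalSection([]string{"user1", "user2", "user3"}, 1)
}
```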
sendRequestToUser - send request message to a user
- send POST to user and wait for reply-ok answer
- start checking if user answered
checkClientIfResponded - listen whether the client answered with reply-ok and check back with him if not
1. GO - clientHealthCheck() - sends periodic beats to check whether the user has responded
2. receive the message sent through the channel
3. if the message is reply-ok, return
4. ping user mutexState
5. wait some time to get a response back
6. if answered: loop back to 2.
7. remove user from waiting list
8. delete user from local user management (inactive)
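A minimal Go sketch of this health-check loop: wait on the reply-ok channel and periodically ping the user in the meantime, dropping the user if it stops answering. Function names and intervals are assumptions:

```go
package main

import (
	"fmt"
	"time"
)

// waitForReplyOk listens on the user's response channel and, while waiting,
// periodically checks whether the user is still alive, as described above.
// It returns false if the user is considered dead and should be removed.
func waitForReplyOk(userID string, replyOk <-chan string, pingUser func(string) bool) bool {
	ticker := time.NewTicker(200 * time.Millisecond) // periodic health check
	defer ticker.Stop()
	for {
		select {
		case msg := <-replyOk:
			fmt.Printf("%s answered with %s\n", userID, msg)
			return true
		case <-ticker.C:
			if !pingUser(userID) { // ping the user's mutexState endpoint
				fmt.Printf("%s did not respond, removing from user list\n", userID)
				return false
			}
		}
	}
}

func main() {
	ch := make(chan string, 1)
	go func() { time.Sleep(300 * time.Millisecond); ch <- "reply-ok" }()
	alive := func(string) bool { return true }
	waitForReplyOk("user2", ch, alive)
}
```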