Implement proposed batch size to be floor(B/N) #28

Open
sbellem opened this issue Aug 24, 2018 · 5 comments


sbellem commented Aug 24, 2018

From @sbellem on August 26, 2017 0:57

@amiller I assume you are well aware of this as there's already a TODO note in the code about implementing the random selection.

Nevertheless, regardless of the size, only one element (tx_to_send[0]) is currently passed to _run_round():

class HoneyBadgerBFT():
    # ...
    def run(self):
        # ...
        while True:
            # ...
            # Select all the transactions (TODO: actual random selection)
            tx_to_send = self.transaction_buffer[:self.B]

            # Run the round
            send_r = _make_send(r)
            recv_r = self._per_round_recv[r].get
            new_tx = self._run_round(r, tx_to_send[0], send_r, recv_r)

So _run_round() and tpke.encrypt() should be capable of taking a list or a similar data structure.

tpke.encrypt() would need to be modified so that a string (e.g.: cPickle.dumps(raw)) is passed to the padding operation.
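As a rough illustration of the serialization step mentioned above (the function names here are hypothetical, and `pickle` stands in for `cPickle` on Python 3), a batch of transactions could be collapsed into a single byte string before it reaches the padding/encryption step:

```python
import pickle

def serialize_batch(tx_batch):
    """Serialize a list of transactions into one byte string, suitable
    for a padding/encryption step that expects a single string input."""
    return pickle.dumps(tx_batch)

def deserialize_batch(data):
    """Recover the original list of transactions after decryption."""
    return pickle.loads(data)
```

This round-trips cleanly, so the decrypted output can be split back into individual transactions.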

So this issue could be done in two (or three) parts:

  1. Pass a list to _run_round(), e.g.: tx_to_send[:1]
  2. Pass floor(B/N) transactions to _run_round(), e.g.: tx_to_send[:int(B/N)]
  3. Implement the random selection
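Putting parts 2 and 3 together, a minimal sketch of the proposed selection might look like this (function name and signature are hypothetical; the `max(1, ...)` guard is from @afck's suggestion later in this thread):

```python
import random

def propose_batch(transaction_buffer, B, N, rng=random):
    """Select the transactions a node proposes for one round:
    look at the first B buffered transactions, then randomly
    choose max(1, floor(B / N)) of them."""
    candidates = transaction_buffer[:B]
    k = max(1, B // N)
    k = min(k, len(candidates))  # can't sample more than we have
    return rng.sample(candidates, k)
```

The result is a list, so it can be passed directly to a `_run_round()` that accepts a list rather than a single element.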

Copied from original issue: amiller/HoneyBadgerBFT#36


sbellem commented Aug 24, 2018

From @amiller on September 1, 2017 19:0

We're already doing a version of this in the demo3 branch but it was made in kind of a rush amiller/HoneyBadgerBFT@6c3f710


sbellem commented Aug 24, 2018

> We're already doing a version of this in the demo3 branch but it was made in kind of a rush 6c3f710

Ah ok, cool! I had not looked at other branches!

Feel free to close or leave open, or transform the goal of the issue into incorporating 6c3f710 into dev ...


sbellem commented Aug 24, 2018

From @afck on August 14, 2018 9:49

I'd maybe use max(1, floor(B / N)), just so that it also works in use cases with very small batch sizes where B / N would be 0.

@sbellem sbellem self-assigned this Aug 24, 2018
@sbellem sbellem added the enhancement New feature or request label Aug 24, 2018
@sbellem sbellem added this to the 1.0 milestone Aug 24, 2018
@sbellem
Copy link
Collaborator Author

sbellem commented Aug 30, 2018

> I'd maybe use max(1, floor(B / N)), just so that it also works in use cases with very small batch sizes where B / N would be 0.

I wonder why one would choose such small batch sizes though, considering the following from the paper:

> As we will see in Section 4.4, our ACS instantiation has a total communication cost of O(N^2 |v| + λ N^3 log N), where |v| bounds the size of any node’s input. We therefore choose a batch size B = Ω(λ N^2 log N) so that the contribution from each node (B/N) absorbs this additive overhead.
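Purely to illustrate the quoted recommendation, one could compute a batch size proportional to λ N^2 log N (the security parameter λ and the constant factor are free choices, not values fixed by the paper):

```python
from math import ceil, log2

def paper_batch_size(N, lam=128, c=1):
    """Illustrative batch size B = c * λ * N^2 * log2(N), following the
    paper's asymptotic recommendation B = Ω(λ N^2 log N)."""
    return c * lam * N * N * ceil(log2(N))
```

Even for modest N this grows quickly (e.g. quadratically in N for the per-node share B/N), which is the context for the latency concern raised in the next comment.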


afck commented Aug 31, 2018

I think the batch size in the paper is mainly there to demonstrate that the claimed asymptotic throughput can be achieved in theory; in practice it would lead to a latency that grows very quickly in N. I imagine that the protocol will sometimes be used in contexts where the batch size is constant but the network size can change.

I agree that B < N is probably a rare case, of course. But if someone uses it that way, an oversized (N instead of B) batch is maybe still better than a network that produces only empty batches. An alternative would be to just return an error.
