Implementation of Proof::Update #17
Conversation
Overall I feel like this PR needs more work because the cached_hashes are not being updated.
If you want, you could separate out these functions:
start_position_at_row
maybe_remap
calc_next_positions
since those look ready to be merged.
Force-pushed 9830cd8 to ba88aa1.
Force-pushed 893cac1, addressing requests. Now we return an updated …
Force-pushed ba88aa1 to 893cac1.
Force-pushed 893cac1 to 9de24c9.
Force-pushed 9de24c9 to 5035505.
Looked at the new changes in detail today.
Force-pushed 5035505 to 2e44174.
I'd like to have comments above each block of code explaining what the code is doing for update_proof_add and update_proof_remove. It'd be helpful for documentation and for reading the code.
src/accumulator/proof.rs
Outdated
// Grab all the positions of the needed proof hashes.
let needed_pos = util::get_proof_positions(&targets, num_leaves, total_rows);

let updated: Vec<_> = updated
We should be creating an extra_positions variable instead of shadowing updated, since we'll need it for adding new proofs.
Take for example this tree:
14
|---------------\
12 13
|-------\ |-------\
08 09 10 11
|---\ |---\ |---\ |---\
00 01 02 03 04 05 06 07
If we cache [00, 07], the hashes of 12 and 13 are left out, as they can be calculated by hashing either (00 | 01) | 09 or 10 | (06 | 07). But if we delete 0, then 12 can no longer be calculated.
updated provides the new hash for 12 after the deletion of 0, and we use this data to add in the missing 12. But the code here filters out 12, so this ends up creating an invalid proof.
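Something like this is what I have in mind, as a rough sketch (names are illustrative, and I'm assuming the current code filters updated down to the positions in needed_pos, as the diff above suggests):

// Bind the filtered list to its own name instead of shadowing `updated`.
let extra_positions: Vec<(u64, sha256::Hash)> = updated
    .iter()
    .copied() // (u64, Hash) is Copy, so no clones needed
    .filter(|(pos, _)| needed_pos.contains(pos))
    .collect();

That way the unfiltered updated is still around to supply hashes like 12 that only exist there.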
Here's the scenario I described above as a test case:
{
"update": {
"adds": [],
"proof": {
"targets": [0],
"hashes": [
"4bf5122f344554c53bde2ebb8cd2b7e3d1600ad631c385a5d7cce23c7785459a",
"9576f4ade6e9bc3a6458b506ce3e4e890df29cb14cb5d3d887672aef55647a2b",
"29590a14c1b09384b94a2c0e94bf821ca75b62eacebc47893397ca88e3bbcbd7"
]
},
"del_hashes": [
"6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"
]
},
"remembers": [],
"cached_proof": {
"targets": [0, 7],
"hashes": [
"4bf5122f344554c53bde2ebb8cd2b7e3d1600ad631c385a5d7cce23c7785459a",
"67586e98fad27da0b9968bc039a1ef34c939b9b8e523a8bef89d478608c5ecf6",
"9576f4ade6e9bc3a6458b506ce3e4e890df29cb14cb5d3d887672aef55647a2b",
"9eec588c41d87b16b0ee226cb38da3864f9537632321d8be855a73d5616dcc73"
]
},
"initial_stump": "Add 8 leaves [0, 1, 2, 3, 4, 5, 6, 7]",
"initial_roots": [
"b151a956139bb821d4effa34ea95c17560e0135d1e4661fc23cedc3af49dac42"
],
"initial_leaves": 8,
"cached_hashes": [
"6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d",
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879"
],
"expected_roots": [
"726fdd3b432cc59e68487d126e70f0db74a236267f8daeae30b31839a4e7ebed"
],
"expected_targets": [7],
"expected_cached_hashes": [
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879"
]
}
I changed how we deal with the old proof: now we use updated without filtering, but only ask for the positions that are required for the new proof. Using updated as passed in would create nodes that are no longer needed, in this case 01 and 09. The filtered version is not used anymore.
Force-pushed 2e44174 to ede17fe.
While I was coming up with the test case that fails when the targets aren't sorted, I also came across another case that should be passing but is failing with the current code. I haven't looked at why this fails, but it should pass.
{
"update": {
"adds": [],
"proof": {
"targets": [3, 6],
"hashes": [
"dbc1b4c900ffe48d575b5da5c638040125f65db0fe3e24494b76ea986457d986",
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879",
"6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d",
"e77b9a9ae9e30b0dbdb6f510a264ef9de781501d7b6b92ae89eb059c5ab743db"
]
},
"del_hashes": [
"084fed08b978af4d7d196a7446a86b58009e636b611db16211b65a9aadff29c5",
"67586e98fad27da0b9968bc039a1ef34c939b9b8e523a8bef89d478608c5ecf6"
]
},
"remembers": [],
"cached_proof": {
"targets": [2, 3, 7, 16, 18, 26],
"hashes": [
"67586e98fad27da0b9968bc039a1ef34c939b9b8e523a8bef89d478608c5ecf6"
]
},
"initial_stump": "Add 12 leaves [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], then delete [1, 4, 8, 9, 10]",
"initial_roots": [
"5093a5ae2bceeef66d9af3f4f1ed71acffe4932cd02afca5d2dc659dca5c418f",
"e7cf46a078fed4fafd0b5e3aff144802b853f8ae459a4f0c14add3314b7cc3a6"
],
"initial_leaves": 12,
"cached_hashes": [
"dbc1b4c900ffe48d575b5da5c638040125f65db0fe3e24494b76ea986457d986",
"084fed08b978af4d7d196a7446a86b58009e636b611db16211b65a9aadff29c5",
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879",
"6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d",
"e77b9a9ae9e30b0dbdb6f510a264ef9de781501d7b6b92ae89eb059c5ab743db",
"e7cf46a078fed4fafd0b5e3aff144802b853f8ae459a4f0c14add3314b7cc3a6"
],
"expected_roots": [
"2bfc2e9199263e7b2ef762afeb2fede84eb4fac37f3f28fd22d5761c8390b875",
"e7cf46a078fed4fafd0b5e3aff144802b853f8ae459a4f0c14add3314b7cc3a6"
],
"expected_targets": [16, 17, 18, 19, 26],
"expected_cached_hashes": [
"6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d",
"dbc1b4c900ffe48d575b5da5c638040125f65db0fe3e24494b76ea986457d986",
"e77b9a9ae9e30b0dbdb6f510a264ef9de781501d7b6b92ae89eb059c5ab743db",
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879",
"e7cf46a078fed4fafd0b5e3aff144802b853f8ae459a4f0c14add3314b7cc3a6"
]
}
Here's the Go code I used to generate the above test case: https://go.dev/play/p/dSFevPamlW3
Force-pushed ede17fe to 09a4d3b.
src/accumulator/proof.rs
Outdated
(new_proof_pos, _) =
    Proof::calc_next_positions(&vec![node], &proof_with_pos, num_leaves, true)?;
}
// Add in proof_hashes to the new_nodes as some needed hashes
Where are the proof_hashes being added in? I don't see it in the code
Do you mean in the new proof?
I mean the old proof hashes (the ones we've updated with new positions).
In the Go implementation I create a completely new proof object, so you need to grab the old proof hashes as well. It looks like you're doing that part in a different way.
The comment isn't really describing what the code is doing, so it should be removed.
src/accumulator/proof.rs
Outdated
}

final_targets.sort();
// Move up positions that need to be moved up due to the empty roots
This isn't where the positions move up, though. It doesn't look like the comment describes what the code is doing.
src/accumulator/proof.rs
Outdated
let mut new_proof: Vec<_> = new_proof_pos.into_iter().zip(self.hashes).collect();
for pos in needed_proof_positions {
    for (new_node_pos, hash) in new_nodes.iter() {
        if *new_node_pos == pos {
There should be an error check here for when we don't find the needed position; the vec of nodes should always contain all the needed positions.
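Here's the kind of check I mean, as a sketch (the String error is only illustrative; whatever error type the function already returns would go there):

for pos in needed_proof_positions {
    match new_nodes.iter().find(|(node_pos, _)| *node_pos == pos) {
        // Found the needed position: take its hash into the new proof.
        Some((_, hash)) => new_proof.push((pos, *hash)),
        // Only error out after every node has been checked for this position.
        None => return Err(format!("needed position {pos} not found in new_nodes")),
    }
}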
When I add an else clause with an error, it breaks. I am not sure why.
Could you post the code you used and the error that occurs when you made the change?
Sure. This is the for loop at line 350.
for pos in needed_proof_positions {
    for (new_node_pos, hash) in new_nodes.iter() {
        if *new_node_pos == pos {
            new_proof.push((pos, *hash));
        } else {
            return Err(format!("Missing position {pos}"));
        }
    }
}
Ah ok, so this is related to https://github.com/mit-dci/rustreexo/pull/17/files#r1029246624
For positions [0, 1, 7, 10], you need [6, 11, 17, 18, 20, 27]. In that loop, it'll try to grab every single one of the newly created positions, starting from 6 up to 27.
However, 6 isn't part of the newly created positions. It's part of the old proof that existed before this block.
In the Go version I'm creating a whole new proof, so what I did was add the old proofs to new_nodes. In yours, new_proof already contains 6, along with 17 and 18.
So I guess you need to filter out what you already have from needed_proof_positions: needed_proof_positions = [6, 11, 17, 18, 20, 27], but since you already have [6, 17, 18] in new_proof, needed_proof_positions can just be [11, 20, 27].
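In code, the filtering I'm suggesting could look something like this (a sketch; it assumes new_proof is the Vec<(u64, Hash)> built above and needed_proof_positions is a mutable Vec<u64>):

// Positions we already carry in the proof don't need to be pulled from new_nodes.
let already_have: Vec<u64> = new_proof.iter().map(|(pos, _)| *pos).collect();
needed_proof_positions.retain(|pos| !already_have.contains(pos));
// e.g. [6, 11, 17, 18, 20, 27] minus the [6, 17, 18] already present leaves [11, 20, 27].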
The problem is that I just copy the old proof to the new one. This doesn't seem right because the proof elements moved, and I have to take their new position. It won't create an error in the updated proof, since we don't rely on the positions (except for sorting, I think). I solved it by remapping the old elements as well.
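Roughly what that remapping looks like, as a sketch (illustrative names: old_positions stands for the positions of the cached proof hashes before the update, however the surrounding code computes them, and block_targets for the block's deleted targets):

// Pair the old proof hashes with their pre-update positions...
let old_with_pos: Vec<(u64, sha256::Hash)> =
    old_positions.into_iter().zip(self.hashes.clone()).collect();
// ...and run them through calc_next_positions so they land on their post-update
// positions before being merged into the new proof.
let (remapped_pos, remapped_hashes) =
    Proof::calc_next_positions(&block_targets, &old_with_pos, num_leaves, true)?;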
Yeah, I see that you're handling it differently than how I did it. What you did works too.
src/accumulator/proof.rs
Outdated
    leaves_after_add,
    util::tree_rows(leaves_after_add),
);
let mut new_proof: Vec<_> = new_proof_pos.into_iter().zip(self.hashes).collect();
new_proof_pos is guaranteed to be sorted. Some of the positions may move around, so it's not guaranteed to be in the same order as self.hashes.
For the hashes to be in the same order as the positions, we need to update the ordering of the hashes when we shuffle the positions around as well.
I think it'll stay in the same order because calc_next_positions doesn't move targets; the sorting happens below, with both hashes and targets.
Ah, that's another bug then. calc_next_positions should result in positions being moved around.
I made this test case for calc_next_positions. It's currently failing on the latest commit.
#[test]
fn test_calc_next_positions() {
use std::str::FromStr; // needed for sha256::Hash::from_str below
use bitcoin_hashes::sha256;
use super::Proof;
#[derive(Clone)]
struct Test {
name: &'static str,
block_targets: Vec<u64>,
old_positions: Vec<(u64, sha256::Hash)>,
num_leaves: u64,
num_adds: u64,
append_roots: bool,
expected: (Vec<u64>, Vec<sha256::Hash>),
}
let tests = vec![Test {
name: "One empty root deleted",
block_targets: vec![26],
old_positions: vec![
(
1,
bitcoin_hashes::sha256::Hash::from_str(
"4bf5122f344554c53bde2ebb8cd2b7e3d1600ad631c385a5d7cce23c7785459a",
)
.unwrap(),
),
(
13,
bitcoin_hashes::sha256::Hash::from_str(
"9d1e0e2d9459d06523ad13e28a4093c2316baafe7aec5b25f30eba2e113599c4",
)
.unwrap(),
),
(
17,
bitcoin_hashes::sha256::Hash::from_str(
"9576f4ade6e9bc3a6458b506ce3e4e890df29cb14cb5d3d887672aef55647a2b",
)
.unwrap(),
),
(
25,
bitcoin_hashes::sha256::Hash::from_str(
"29590a14c1b09384b94a2c0e94bf821ca75b62eacebc47893397ca88e3bbcbd7",
)
.unwrap(),
),
],
num_leaves: 14,
num_adds: 2,
append_roots: false,
expected: (
vec![1, 17, 21, 25],
vec![
bitcoin_hashes::sha256::Hash::from_str(
"4bf5122f344554c53bde2ebb8cd2b7e3d1600ad631c385a5d7cce23c7785459a",
)
.unwrap(),
bitcoin_hashes::sha256::Hash::from_str(
"9576f4ade6e9bc3a6458b506ce3e4e890df29cb14cb5d3d887672aef55647a2b",
)
.unwrap(),
bitcoin_hashes::sha256::Hash::from_str(
"9d1e0e2d9459d06523ad13e28a4093c2316baafe7aec5b25f30eba2e113599c4",
)
.unwrap(),
bitcoin_hashes::sha256::Hash::from_str(
"29590a14c1b09384b94a2c0e94bf821ca75b62eacebc47893397ca88e3bbcbd7",
)
.unwrap(),
],
),
}];
for test in tests {
let res = Proof::calc_next_positions(
&test.block_targets,
&test.old_positions,
test.num_leaves + test.num_adds,
test.append_roots,
)
.unwrap();
assert_eq!(res, test.expected, "testcase: \"{}\" fail", test.name);
}
}
Here's the same test case in Go. You have to run it locally, though.
func strToHash(str string) Hash {
hash, err := hex.DecodeString(str)
if err != nil {
panic(err)
}
return *(*[32]byte)(hash)
}
func Test12(t *testing.T) {
blockTargets := []uint64{26}
hashes := []Hash{
strToHash("4bf5122f344554c53bde2ebb8cd2b7e3d1600ad631c385a5d7cce23c7785459a"),
strToHash("9d1e0e2d9459d06523ad13e28a4093c2316baafe7aec5b25f30eba2e113599c4"),
strToHash("9576f4ade6e9bc3a6458b506ce3e4e890df29cb14cb5d3d887672aef55647a2b"),
strToHash("29590a14c1b09384b94a2c0e94bf821ca75b62eacebc47893397ca88e3bbcbd7"),
}
proofAndPos := hashAndPos{[]uint64{1, 13, 17, 25}, hashes}
numLeaves := uint64(14)
fmt.Println("old proofAndPos:\n", proofAndPos.String())
proofAndPos = getNewPositions(blockTargets, proofAndPos, numLeaves+uint64(2), false)
fmt.Println("new proofAndPos:\n", proofAndPos.String())
}
Here's a Go playground link for how I generated the example case: https://go.dev/play/p/Bgcuymo-ugd
Ah that's another bug then. calc_next_positions should result in positions being moved around.

OK, it does result in positions being moved around, but it doesn't return them sorted. I made it this way because in some places I don't need hashes, only positions. To avoid unzips, calc_next_positions returns a tuple of Vec<u64> and Vec<Hash>. If I have to sort, then I need to hold a Vec<(u64, Hash)>, since I can't sort hashes without positions.
The code handles this by sorting where needed. Do you think it's worth changing calc_next_positions to produce a sorted Vec<(u64, Hash)>?
Do you think it's worth changing calc_next_positions to produce a sorted Vec<(u64, Hash)>?

When I redid the algorithm for how the proof updates are done, I tried to avoid sorting wherever possible, but I couldn't think of a way to avoid sorting during the position updates. So we can't avoid sorting (for now; maybe there is an algorithm that works without it).
You could internally use Vec<(u64, Hash)> and just unzip before returning. If unzip turns out to be too expensive when benchmarked, we can look at ways of reducing the cost.
Ok, 826f047 sorts inside calc_next_positions. I added your test, and it's passing.
src/accumulator/proof.rs
Outdated
let res = cached_proof.update_proof_add(
    additional_hashes,
    cached_target_hashes.clone(),
    vec![10],
I think you wanted to cache the hash of 10, so it should be vec![0].
Force-pushed 09a4d3b to 4635eee.
Force-pushed 4635eee to 826f047.
update_proof_remove looks good (at least I can't find any errors). update_proof_add needs a bit more touching up.
src/accumulator/proof.rs
Outdated
let mut new_proof: Vec<_> = new_proof_pos.into_iter().zip(self.hashes).collect();
for pos in needed_proof_positions {
    for (new_node_pos, hash) in new_nodes.iter() {
        if *new_node_pos == pos {
Yeah, I see that you're handling it differently than how I did it. What you did works too.
let proof_with_pos =
    Proof::maybe_remap(before_num_leaves, adds.len() as u64, proof_with_pos);
// remembers is an index telling what newly created UTXO should be cached
for remember in remembers {
Adding in the remembers to the targets before you move the target positions up due to the toDestroy can result in errors where the remembers also get moved up, ending up at the wrong position. I made a test case to catch this:
[
{
"update": {
"adds": [6, 7, 8],
"proof": {
"targets": [4, 5],
"hashes": []
},
"del_hashes": [
"040000ff00000000000000000000000000000000000000000000000000000000",
"050000ff00000000000000000000000000000000000000000000000000000000"
]
},
"remembers": [1],
"cached_proof": {
"targets": [3],
"hashes": [
"020000ff00000000000000000000000000000000000000000000000000000000",
"daaf42b3128dcdaa8056c38fb5563495c31a177988287f61bb15c572c9d1a726"
]
},
"initial_stump": "Add 6 leaves [0, 1, 2, 3, 4, 5], then delete [4, 5] and add [6, 7, 8]",
"initial_roots": [
"1083c74aeaa42c9967ec1e13ef5e71806cbc977edfd1a260aab95eb0e15872e7",
"dfcb6045f5d4851dcf25f7bc2f8bc0ec974ccd01d6c2d753f9b777cfcfced559"
],
"initial_leaves": 6,
"cached_hashes": [
"030000ff00000000000000000000000000000000000000000000000000000000"
],
"expected_roots": [
"5b6a19c3e8d41baa1694ed0237414f13e16b355b95a894c019b4384ad79acfdd",
"080000ff00000000000000000000000000000000000000000000000000000000"
],
"expected_targets": [3, 19],
"expected_cached_hashes": [
"030000ff00000000000000000000000000000000000000000000000000000000",
"070000ff00000000000000000000000000000000000000000000000000000000"
]
}
]
Here's a replicated test in Go that you can just copy and paste into your local repo.
func strToHash(str string) Hash {
hash, err := hex.DecodeString(str)
if err != nil {
panic(err)
}
return *(*[32]byte)(hash)
}
func Test2(t *testing.T) {
adds := []Hash{
strToHash("060000ff00000000000000000000000000000000000000000000000000000000"),
strToHash("070000ff00000000000000000000000000000000000000000000000000000000"),
strToHash("080000ff00000000000000000000000000000000000000000000000000000000"),
}
blockProof := Proof{Targets: []uint64{4, 5}}
delHashes := []Hash{
strToHash("040000ff00000000000000000000000000000000000000000000000000000000"),
strToHash("050000ff00000000000000000000000000000000000000000000000000000000"),
}
remembers := []uint32{1}
cachedProof := Proof{
Targets: []uint64{3},
Proof: []Hash{
strToHash("020000ff00000000000000000000000000000000000000000000000000000000"),
strToHash("daaf42b3128dcdaa8056c38fb5563495c31a177988287f61bb15c572c9d1a726"),
},
}
s := Stump{
Roots: []Hash{
strToHash("1083c74aeaa42c9967ec1e13ef5e71806cbc977edfd1a260aab95eb0e15872e7"),
strToHash("dfcb6045f5d4851dcf25f7bc2f8bc0ec974ccd01d6c2d753f9b777cfcfced559"),
},
NumLeaves: 6,
}
cachedHashes := []Hash{
strToHash("030000ff00000000000000000000000000000000000000000000000000000000"),
}
expectedRoots := []Hash{
strToHash("5b6a19c3e8d41baa1694ed0237414f13e16b355b95a894c019b4384ad79acfdd"),
strToHash("080000ff00000000000000000000000000000000000000000000000000000000"),
}
updateData, err := s.Update(delHashes, adds, blockProof)
if err != nil {
t.Fatal(err)
}
// Sanity checking
if !reflect.DeepEqual(s.Roots, expectedRoots) {
t.Fatal(fmt.Errorf("didn't get expected roots"))
}
cachedHashes, err = cachedProof.Update(cachedHashes, adds, blockProof.Targets, remembers, updateData)
if err != nil {
t.Fatal(err)
}
// Sanity checking
_, err = Verify(s, cachedHashes, cachedProof)
if err != nil {
t.Fatal(err)
}
fmt.Println("cachedHashes:\n", printHashes(cachedHashes))
fmt.Println("cachedProof:\n", cachedProof.String())
}
Those hashes look a little weird. The test says Add 6 leaves [0, 1, 2, 3, 4, 5], then delete [4, 5] and add [6, 7, 8], and delHashes is
040000ff00000000000000000000000000000000000000000000000000000000
050000ff00000000000000000000000000000000000000000000000000000000
That's ff000004 and ff000005 in little-endian, padded out with 0 bytes. I'm almost certain these are not their hashes.
If you meant to use these values as leaf hashes anyway, without knowing the preimage, that won't work with the testing framework we made, because it hashes the preimages (usually numbers) and uses the result as the leaf hash.
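For reference, here is roughly how the leaf hashes in these vectors appear to be built (a sketch based on the values above; leaf_hash is just an illustrative name):

use bitcoin_hashes::{sha256, Hash};

// Each leaf i is committed as sha256 of its single-byte preimage.
fn leaf_hash(i: u8) -> sha256::Hash {
    sha256::Hash::hash(&[i])
}

For example, sha256 of the single byte 0x00 is the 6e340b9c… value used for leaf 0 in the earlier test case.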
Ah, the 0xff's are shoved in there in the Go tests because the Go library interprets all 0s as nothing. So instead of hashing it, the test just picks a byte and turns it into 0xff.
I'll change the test so it's better suited for this lib. But the
040000ff00000000000000000000000000000000000000000000000000000000
050000ff00000000000000000000000000000000000000000000000000000000
are supposed to represent 4 and 5.
Could you add this into the tests? This is the same as above but it's hashing the ints.
{
"update": {
"adds": [6, 7, 8],
"proof": {
"targets": [4, 5],
"hashes": []
},
"del_hashes": [
"e52d9c508c502347344d8c07ad91cbd6068afc75ff6292f062a09ca381c89e71",
"e77b9a9ae9e30b0dbdb6f510a264ef9de781501d7b6b92ae89eb059c5ab743db"
]
},
"remembers": [1],
"cached_proof": {
"targets": [3],
"hashes": [
"dbc1b4c900ffe48d575b5da5c638040125f65db0fe3e24494b76ea986457d986",
"02242b37d8e851f1e86f46790298c7097df06893d6226b7c1453c213e91717de"
]
},
"initial_stump": "Add 6 leaves [0, 1, 2, 3, 4, 5], then delete [4, 5] and add [6, 7, 8]",
"initial_roots": [
"df46b17be5f66f0750a4b3efa26d4679db170a72d41eb56c3e4ff75a58c65386",
"9eec588c41d87b16b0ee226cb38da3864f9537632321d8be855a73d5616dcc73"
],
"initial_leaves": 6,
"cached_hashes": [
"084fed08b978af4d7d196a7446a86b58009e636b611db16211b65a9aadff29c5"
],
"expected_roots": [
"0f1ff6dc2fc7cf8e27236d51de9af7d32adf10a94eef498e167d5e7cf98b0112",
"beead77994cf573341ec17b58bbf7eb34d2711c993c1d976b128b3188dc1829a"
],
"expected_targets": [3, 19],
"expected_cached_hashes": [
"084fed08b978af4d7d196a7446a86b58009e636b611db16211b65a9aadff29c5",
"ca358758f6d27e6cf45272937977a748fd88391db679ceda7dc7bf1f005ee879"
]
}
Hey, totally forgot about this PR. Looks like I left some comments about update_proof_add but saw that update_proof_delete was ok. I could re-review this, but do you want to refactor update_proof_delete out into a different PR first and get that merged?
I was about to ask you about this 😆
Yes, I can separate it into two PRs.
Could you rebase this onto main? I'll take a look at the add then.
Force-pushed 2543bb8 to 8508298.
Rebased. IIRC the last thing missing to …
Things look good. Just these nits and the test case I mentioned in #17 (comment) need to be addressed, and then this should be good to merge.
    .zip(self.hashes.clone())
    .collect();

// Remap the positions if we moved up a after the addition row.
nit: Could you change the comment to:
Remap the positions if we moved up a row after the addition.
Just making things grammatically correct.
    Proof::calc_next_positions(&vec![node], &proof_with_pos, num_leaves, true)?;
}

// remembers is an index telling what newly created UTXO should be cached
nit: Could you change UTXO to UTXOs?
This method is useful for cached proofs: it can take a valid proof and update it with all of a block's UTXOs, yielding a new (also valid) proof. It replaces all intermediate values in the proof with new ones and adds new positions that weren't in the original one. Finally, it calculates the new positions for each target.
Force-pushed 8508298 to d7e3b1b.
Hmm, ok, I could address the nits in a later PR. I'll merge it so it's not just sitting around forever.
I forgot to push that 🤦♂️ Sorry about that, I changed it locally and didn't push :')
This PR finishes the required steps for #12. Now it's possible to update a proof independently, given a set of additions and deletions from a block.