Zw/noir recursion 2 (#414)
* removed redundant `reduce` operations after negating biggroup elements

simplified hash input structure when hashing transcripts

cached partial non-native field multiplications

reverted how native transcript computes hash buffers

pedersen_plookup can be configured to skip the hash_single range check under limited conditions

fixed the range check in pedersen_plookup::hash_single

pedersen_plookup::hash_single now validates that the low and high scalar slice values match the original scalar

bigfield::operator- now correctly uses the UltraPlonk code path where possible

added biggroup::multiple_montgomery_ladder to reduce required field multiplications

added biggroup::quadruple_and_add to reduce required field multiplications

biggroup_nafs now directly calls the Composer range constraint methods to avoid creating redundant arithmetic gates when using the PlookupComposer

biggroup plookup ROM tables now track the maximum size of any field element recovered from the table (i.e. the maximum of the input maximum sizes)

biggroup batch tables prefer to create size-6 lookup tables if doing so reduces the number of individual tables required for a given MSM

recursion::transcript no longer performs redundant range constraints when adding buffer elements

recursion::transcript correctly checks that, when slicing field elements, the slice values are correct over the integers (i.e. slice_sum != original + p); a small standalone illustration of this check is included after these notes

recursion::verification_key now optimally packs key data into minimum required number of field elements before hashing

recursion::verifier proof and key data is now correctly extracted from the transcript/key instead of being generated directly as witnesses.

cleaned up code + comments

code tidy, added more comments

cleaned up how aggregation object handles public inputs

native verification_key::compress matches circuit output

fixed compile errors + failing tests

compiler error

join_split.test.cpp passing

Note: not changing any upstream .js verification keys. I don't think we need to, as bberg is now decoupled from aztec connect.
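To make the transcript slicing fix concrete, here is a small self-contained illustration in plain C++ (toy numbers only, not barretenberg circuit code) of why the slice values have to be checked over the integers rather than only modulo the field prime:

#include <cassert>
#include <cstdint>

int main()
{
    const uint64_t p = 251;  // toy stand-in for the field modulus
    const uint64_t k = 4;    // the low slice holds k bits
    const uint64_t t = 100;  // the original scalar being sliced

    // honest slices: t == hi * 2^k + lo over the integers
    const uint64_t lo = t & ((1u << k) - 1);
    const uint64_t hi = t >> k;
    assert(hi * (1u << k) + lo == t);

    // malicious slices derived from t + p still satisfy the relation modulo p ...
    const uint64_t bad = t + p;
    const uint64_t bad_lo = bad & ((1u << k) - 1);
    const uint64_t bad_hi = bad >> k;
    assert((bad_hi * (1u << k) + bad_lo) % p == t % p);

    // ... but are caught by the over-the-integers check (slice_sum != original + p)
    assert(bad_hi * (1u << k) + bad_lo != t);
    return 0;
}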

* compiler fix

* more compiler fix

* attempt to fix .js and .sol tests

* revert keccak transcript to original functionality

* added hash_index back into verification_key::compress

fixed composer bug where `decompose_into_default_range` was sometimes not range-constraining the last limb

removed commented-out code

added more descriptive comments to PedersenPreimageBuilder

* changed join-split vkey

* temporarily point to a branch of aztec that updates aggregation state usage, until the fix is in aztec master

* revert .aztec-packages-commit

* header brittleness fix

* compiler fix

* compiler fix w. aggregation object

* reverting changes to `assign_object_to_proof_outputs` to preserve backwards-compatibility with a3-packages

* more backwards compatibility fixes

* wip

---------

Co-authored-by: dbanks12 <david@aztecprotocol.com>
Co-authored-by: David Banks <47112877+dbanks12@users.noreply.github.com>
3 people authored May 17, 2023
1 parent 2f49c10 commit 5425693
Showing 42 changed files with 1,611 additions and 1,294 deletions.
@@ -5,6 +5,16 @@
namespace crypto {
namespace pedersen_commitment {

+/**
+* @brief Converts input uint8_t buffers into vector of field elements. Used to hash the Transcript in a SNARK-friendly
+* manner for recursive circuits.
+*
+* `buffer` is an unstructured byte array we want to convert these into field elements
+* prior to hashing. We do this by splitting buffer into 31-byte chunks.
+*
+* @param buffer
+* @return std::vector<grumpkin::fq>
+*/
inline std::vector<grumpkin::fq> convert_buffer_to_field(const std::vector<uint8_t>& input)
{
const size_t num_bytes = input.size();
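The body of `convert_buffer_to_field` is collapsed above. As a rough sketch of the 31-byte chunking described in the comment (the helper name, the big-endian packing order and the use of `uint256_t` here are assumptions for illustration, not the actual implementation):

// Sketch only: pack the buffer into 31-byte chunks so every chunk is guaranteed to fit
// inside a grumpkin::fq element without modular reduction.
inline std::vector<grumpkin::fq> convert_buffer_to_field_sketch(const std::vector<uint8_t>& input)
{
    constexpr size_t bytes_per_chunk = 31;
    std::vector<grumpkin::fq> elements;
    for (size_t offset = 0; offset < input.size(); offset += bytes_per_chunk) {
        const size_t chunk_size = std::min(bytes_per_chunk, input.size() - offset);
        uint256_t chunk = 0;
        for (size_t i = 0; i < chunk_size; ++i) {
            chunk = (chunk << 8) + uint256_t(input[offset + i]); // assumed big-endian packing
        }
        elements.emplace_back(grumpkin::fq(chunk));
    }
    return elements;
}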
8 changes: 4 additions & 4 deletions cpp/src/barretenberg/crypto/pedersen_commitment/pedersen.cpp
@@ -105,16 +105,16 @@ grumpkin::fq compress_native(const std::vector<std::pair<grumpkin::fq, generator
/**
* Given an arbitrary length of bytes, convert them to fields and compress the result using the default generators.
*/
-grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input)
+grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input, const size_t hash_index)
{
const auto elements = convert_buffer_to_field(input);
-grumpkin::fq result_fq = compress_native(elements);
+grumpkin::fq result_fq = compress_native(elements, hash_index);
return result_fq;
}

-grumpkin::fq compress_native(const std::vector<uint8_t>& input)
+grumpkin::fq compress_native(const std::vector<uint8_t>& input, const size_t hash_index)
{
-return compress_native_buffer_to_field(input);
+return compress_native_buffer_to_field(input, hash_index);
}

} // namespace pedersen_commitment
@@ -22,7 +22,7 @@ template <size_t T> grumpkin::fq compress_native(const std::array<grumpkin::fq,
return commit_native(converted).x;
}

-grumpkin::fq compress_native(const std::vector<uint8_t>& input);
+grumpkin::fq compress_native(const std::vector<uint8_t>& input, const size_t hash_index = 0);

grumpkin::fq compress_native(const std::vector<std::pair<grumpkin::fq, generators::generator_index_t>>& input_pairs);

@@ -28,11 +28,12 @@ grumpkin::g1::element merkle_damgard_compress(const std::vector<grumpkin::fq>& i
const size_t num_inputs = inputs.size();

grumpkin::fq result = (pedersen_iv_table[iv]).x;
-for (size_t i = 0; i < num_inputs; i++) {
+result = hash_pair(result, num_inputs);
+for (size_t i = 0; i < num_inputs - 1; i++) {
result = hash_pair(result, inputs[i]);
}

-return (hash_single(result, false) + hash_single(grumpkin::fq(num_inputs), true));
+return (hash_single(result, false) + hash_single(inputs[num_inputs - 1], true));
}

grumpkin::g1::element merkle_damgard_compress(const std::vector<grumpkin::fq>& inputs, const std::vector<size_t>& ivs)
@@ -46,16 +47,16 @@ grumpkin::g1::element merkle_damgard_compress(const std::vector<grumpkin::fq>& i
const size_t num_inputs = inputs.size();

grumpkin::fq result = (pedersen_iv_table[0]).x;
-for (size_t i = 0; i < 2 * num_inputs; i++) {
+result = hash_pair(result, num_inputs);
+for (size_t i = 0; i < 2 * num_inputs - 1; i++) {
if ((i & 1) == 0) {
grumpkin::fq iv_result = (pedersen_iv_table[ivs[i >> 1]]).x;
result = hash_pair(result, iv_result);
} else {
result = hash_pair(result, inputs[i >> 1]);
}
}

-return (hash_single(result, false) + hash_single(grumpkin::fq(num_inputs), true));
+return (hash_single(result, false) + hash_single(inputs[num_inputs - 1], true));
}

grumpkin::g1::element merkle_damgard_tree_compress(const std::vector<grumpkin::fq>& inputs,
@@ -111,16 +112,16 @@ grumpkin::fq compress_native(const std::vector<grumpkin::fq>& inputs, const std:
return commit_native(inputs, hash_indices).x;
}

-grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input)
+grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input, const size_t hash_index)
{
const auto elements = convert_buffer_to_field(input);
-grumpkin::fq result_fq = compress_native(elements);
+grumpkin::fq result_fq = compress_native(elements, hash_index);
return result_fq;
}

-std::vector<uint8_t> compress_native(const std::vector<uint8_t>& input)
+std::vector<uint8_t> compress_native(const std::vector<uint8_t>& input, const size_t hash_index)
{
-const auto result_fq = compress_native_buffer_to_field(input);
+const auto result_fq = compress_native_buffer_to_field(input, hash_index);
uint256_t result_u256(result_fq);
const size_t num_bytes = input.size();

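Reading the interleaved -/+ lines above together, the updated single-iv merkle_damgard_compress now absorbs the input count up front and folds the final input into the closing hash_single pair. A condensed restatement of the new control flow (a paraphrase of the hunk above, not new functionality):

// Condensed restatement of the updated merkle_damgard_compress shown above;
// hash_pair and hash_single are the existing pedersen_lookup helpers.
grumpkin::g1::element merkle_damgard_compress_sketch(const std::vector<grumpkin::fq>& inputs, const size_t iv)
{
    const size_t num_inputs = inputs.size();
    grumpkin::fq result = (pedersen_iv_table[iv]).x;

    // the input length is hashed in first, instead of being appended at the end
    result = hash_pair(result, num_inputs);

    // chain every input except the last one
    for (size_t i = 0; i < num_inputs - 1; i++) {
        result = hash_pair(result, inputs[i]);
    }

    // the final input is folded into the closing hash_single pair
    return (hash_single(result, false) + hash_single(inputs[num_inputs - 1], true));
}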
@@ -13,9 +13,9 @@ grumpkin::g1::element merkle_damgard_tree_compress(const std::vector<grumpkin::f

grumpkin::fq compress_native(const std::vector<grumpkin::fq>& inputs, const size_t hash_index = 0);
grumpkin::fq compress_native(const std::vector<grumpkin::fq>& inputs, const std::vector<size_t>& hash_indices);
-std::vector<uint8_t> compress_native(const std::vector<uint8_t>& input);
+std::vector<uint8_t> compress_native(const std::vector<uint8_t>& input, const size_t hash_index = 0);

-grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input);
+grumpkin::fq compress_native_buffer_to_field(const std::vector<uint8_t>& input, const size_t hash_index = 0);

template <size_t T> grumpkin::fq compress_native(const std::array<grumpkin::fq, T>& inputs)
{
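A hypothetical call site for the extended signatures above, assuming these declarations live in crypto::pedersen_commitment::lookup as the tests below suggest (the buffer contents and the explicit hash_index value are made up; the default hash_index = 0 preserves the old single-argument behaviour):

// Hypothetical usage sketch of the new hash_index parameter.
std::vector<uint8_t> buffer = { 0x01, 0x02, 0x03 };

// default hash_index = 0: same result as the previous single-argument overload
grumpkin::fq h0 = crypto::pedersen_commitment::lookup::compress_native_buffer_to_field(buffer);

// explicit hash_index, e.g. for domain separation when hashing a verification key
grumpkin::fq h1 = crypto::pedersen_commitment::lookup::compress_native_buffer_to_field(buffer, /*hash_index=*/1);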
@@ -157,18 +157,17 @@ TEST(pedersen_lookup, merkle_damgard_compress)

const auto result = crypto::pedersen_commitment::lookup::merkle_damgard_compress(inputs, iv);

-fq intermediate = (grumpkin::g1::affine_one * fr(iv + 1)).x;
+auto iv_hash = compute_expected((grumpkin::g1::affine_one * fr(iv + 1)).x, 0);
+auto length = compute_expected(fq(m), (crypto::pedersen_hash::lookup::NUM_PEDERSEN_TABLES / 2));
+fq intermediate = affine_element(iv_hash + length).x;
for (size_t i = 0; i < m; i++) {
intermediate =
affine_element(compute_expected(intermediate, 0) +
compute_expected(inputs[i], (crypto::pedersen_hash::lookup::NUM_PEDERSEN_TABLES / 2)))
.x;
}

-EXPECT_EQ(affine_element(result).x,
-affine_element(compute_expected(intermediate, 0) +
-compute_expected(fq(m), (crypto::pedersen_hash::lookup::NUM_PEDERSEN_TABLES / 2)))
-.x);
+EXPECT_EQ(affine_element(result).x, intermediate);
}

TEST(pedersen_lookup, merkle_damgard_compress_multiple_iv)
@@ -188,7 +187,11 @@ TEST(pedersen_lookup, merkle_damgard_compress_multiple_iv)
const auto result = crypto::pedersen_commitment::lookup::merkle_damgard_compress(inputs, ivs);

const size_t initial_iv = 0;
-fq intermediate = (grumpkin::g1::affine_one * fr(initial_iv + 1)).x;
+auto iv_hash = compute_expected((grumpkin::g1::affine_one * fr(initial_iv + 1)).x, 0);

+auto length = compute_expected(fq(m), (crypto::pedersen_hash::lookup::NUM_PEDERSEN_TABLES / 2));
+fq intermediate = affine_element(iv_hash + length).x;

for (size_t i = 0; i < 2 * m; i++) {
if ((i & 1) == 0) {
const auto iv = (grumpkin::g1::affine_one * fr(ivs[i >> 1] + 1)).x;
Expand All @@ -204,10 +207,7 @@ TEST(pedersen_lookup, merkle_damgard_compress_multiple_iv)
}
}

-EXPECT_EQ(affine_element(result).x,
-affine_element(compute_expected(intermediate, 0) +
-compute_expected(fq(m), (crypto::pedersen_hash::lookup::NUM_PEDERSEN_TABLES / 2)))
-.x);
+EXPECT_EQ(affine_element(result).x, intermediate);
}

TEST(pedersen_lookup, merkle_damgard_tree_compress)
5 changes: 3 additions & 2 deletions cpp/src/barretenberg/honk/composer/ultra_honk_composer.hpp
@@ -371,11 +371,12 @@ class UltraHonkComposer {
};
// std::array<uint32_t, 2> decompose_non_native_field_double_width_limb(
// const uint32_t limb_idx, const size_t num_limb_bits = (2 * DEFAULT_NON_NATIVE_FIELD_LIMB_BITS));
-std::array<uint32_t, 2> queue_non_native_field_multiplication(
+std::array<uint32_t, 2> evaluate_non_native_field_multiplication(
const UltraCircuitConstructor::non_native_field_witnesses& input,
const bool range_constrain_quotient_and_remainder = true)
{
-return circuit_constructor.queue_non_native_field_multiplication(input, range_constrain_quotient_and_remainder);
+return circuit_constructor.evaluate_non_native_field_multiplication(input,
+range_constrain_quotient_and_remainder);
};
// std::array<uint32_t, 2> evaluate_partial_non_native_field_multiplication(const non_native_field_witnesses&
// input); typedef std::pair<uint32_t, barretenberg::fr> scaled_witness; typedef std::tuple<scaled_witness,
@@ -649,6 +649,7 @@ TEST(UltraHonkComposer, non_native_field_multiplication)

fq a = fq::random_element();
fq b = fq::random_element();

uint256_t modulus = fq::modulus;

uint1024_t a_big = uint512_t(uint256_t(a));
@@ -692,7 +693,7 @@ TEST(UltraHonkComposer, non_native_field_multiplication)
proof_system::UltraCircuitConstructor::non_native_field_witnesses inputs{
a_indices, b_indices, q_indices, r_indices, modulus_limbs, fr(uint256_t(modulus)),
};
-const auto [lo_1_idx, hi_1_idx] = composer.queue_non_native_field_multiplication(inputs);
+const auto [lo_1_idx, hi_1_idx] = composer.evaluate_non_native_field_multiplication(inputs);
composer.range_constrain_two_limbs(lo_1_idx, hi_1_idx, 70, 70);

prove_and_verify(composer, /*expected_result=*/true);
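The test above supplies witnesses for the identity a * b = q * p + r, where p is the non-native modulus and q, r are the prover-provided quotient and remainder; the returned lo/hi outputs are then range-constrained to 70 bits. As a toy native-integer illustration of that identity (small numbers, plain C++; the real gadget enforces the relation limb-wise in-circuit):

#include <cassert>
#include <cstdint>

int main()
{
    const uint64_t p = 251; // toy "non-native" modulus
    const uint64_t a = 123;
    const uint64_t b = 200;

    // prover-supplied quotient and remainder
    const uint64_t q = (a * b) / p;
    const uint64_t r = (a * b) % p;

    // the non-native multiplication gadget checks this relation
    assert(a * b == q * p + r);
    return 0;
}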
3 changes: 2 additions & 1 deletion cpp/src/barretenberg/honk/proof_system/prover.hpp
@@ -15,7 +15,8 @@ namespace proof_system::honk {

// We won't compile this class with honk::flavor::Ultra, but we will like want to compile it (at least for testing)
// with a flavor that uses the curve Grumpkin, or a flavor that does/does not have zk, etc.
-template <typename T> concept StandardFlavor = IsAnyOf<T, honk::flavor::Standard>;
+template <typename T>
+concept StandardFlavor = IsAnyOf<T, honk::flavor::Standard>;

template <StandardFlavor Flavor> class StandardProver_ {

3 changes: 2 additions & 1 deletion cpp/src/barretenberg/honk/proof_system/ultra_prover.hpp
@@ -13,7 +13,8 @@ namespace proof_system::honk {

// We won't compile this class with honk::flavor::Standard, but we will like want to compile it (at least for testing)
// with a flavor that uses the curve Grumpkin, or a flavor that does/does not have zk, etc.
-template <typename T> concept UltraFlavor = IsAnyOf<T, honk::flavor::Ultra>;
+template <typename T>
+concept UltraFlavor = IsAnyOf<T, honk::flavor::Ultra>;
template <UltraFlavor Flavor> class UltraProver_ {

using FF = typename Flavor::FF;
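The two hunks above only reflow the `concept` definitions onto two lines. For readers unfamiliar with the pattern, here is a minimal self-contained sketch of how an IsAnyOf-style concept gates a class template (this IsAnyOf definition is a guess at the helper's shape, not barretenberg's actual code):

#include <concepts>

// assumed shape of the IsAnyOf helper: T must be the same type as one of U...
template <typename T, typename... U>
concept IsAnyOf = (std::same_as<T, U> || ...);

struct UltraFlavorTag {};
struct StandardFlavorTag {};

template <typename T>
concept UltraFlavorLike = IsAnyOf<T, UltraFlavorTag>;

// only instantiable with a type satisfying UltraFlavorLike
template <UltraFlavorLike Flavor> class UltraProverSketch {};

UltraProverSketch<UltraFlavorTag> ok_instance;      // compiles
// UltraProverSketch<StandardFlavorTag> rejected;   // would fail the constraint at compile time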
@@ -806,11 +806,12 @@ TEST_F(join_split_tests, test_0_input_notes_and_detect_circuit_change)

// The below part detects any changes in the join-split circuit

-constexpr uint32_t CIRCUIT_GATE_COUNT = 185573;
+constexpr uint32_t CIRCUIT_GATE_COUNT = 183834;
constexpr uint32_t GATES_NEXT_POWER_OF_TWO = 524288;
const uint256_t VK_HASH("13eb88883e80efb9bf306af2962cd1a49e9fa1b0bfb2d4b563b95217a17bcc74");
const uint256_t VK_HASH("5c2e0fe914dbbf23d6bac6ae4db9a7e43d98c0b9d71c9200208dbce24a815c6e");

auto number_of_gates_js = result.number_of_gates;
+std::cout << get_verification_key()->sha256_hash() << std::endl;
auto vk_hash_js = get_verification_key()->sha256_hash();

if (!CIRCUIT_CHANGE_EXPECTED) {
@@ -380,13 +380,14 @@ class UltraPlonkComposer {
};
// std::array<uint32_t, 2> decompose_non_native_field_double_width_limb(
// const uint32_t limb_idx, const size_t num_limb_bits = (2 * DEFAULT_NON_NATIVE_FIELD_LIMB_BITS));
-std::array<uint32_t, 2> queue_non_native_field_multiplication(
+std::array<uint32_t, 2> evaluate_non_native_field_multiplication(
const UltraCircuitConstructor::non_native_field_witnesses& input,
const bool range_constrain_quotient_and_remainder = true)
{
-return circuit_constructor.queue_non_native_field_multiplication(input, range_constrain_quotient_and_remainder);
+return circuit_constructor.evaluate_non_native_field_multiplication(input,
+range_constrain_quotient_and_remainder);
};
-// std::array<uint32_t, 2> evaluate_partial_non_native_field_multiplication(const non_native_field_witnesses&
+// std::array<uint32_t, 2> queue_partial_non_native_field_multiplication(const non_native_field_witnesses&
// input); typedef std::pair<uint32_t, barretenberg::fr> scaled_witness; typedef std::tuple<scaled_witness,
// scaled_witness, barretenberg::fr> add_simple; std::array<uint32_t, 5> evaluate_non_native_field_subtraction(
// add_simple limb0,
@@ -781,7 +781,7 @@ TEST(ultra_plonk_composer_splitting_tmp, non_native_field_multiplication)
UltraCircuitConstructor::non_native_field_witnesses inputs{
a_indices, b_indices, q_indices, r_indices, modulus_limbs, fr(uint256_t(modulus)),
};
-const auto [lo_1_idx, hi_1_idx] = composer.queue_non_native_field_multiplication(inputs);
+const auto [lo_1_idx, hi_1_idx] = composer.evaluate_non_native_field_multiplication(inputs);
composer.range_constrain_two_limbs(lo_1_idx, hi_1_idx, 70, 70);

auto prover = composer.create_prover();

