
Allow duplicate output commitments #3271

Closed
antiochp opened this issue Mar 17, 2020 · 14 comments
Labels
consensus breaking Use for issues or PRs that will break consensus and force a hard fork

Comments

@antiochp
Member

We currently have a consensus rule that disallows duplicate output commitments in the UTXO set.
We allow output commitments to be reused over time, but duplicates cannot exist concurrently.

We would like to consider relaxing this rule. This would be a consensus breaking change.

Rationale

One transaction can effectively invalidate another if they share an output commitment. This is not an issue for regular transactions, but it is for multi-party transactions, for example when closing an Elder (Lightning style) payment channel.
In this scenario a multiparty output is spent to two individual outputs, one for each channel participant. Either participant can then block the overall transaction by introducing a conflicting duplicate output, preventing the channel funds from being distributed back to both parties.

There are workarounds to this, specifically introducing two independent close transactions to the channel, preventing either from impacting the other. But this comes at a cost of increased complexity and increased channel communication cost (more transactions to build).

The ideal solution here is to allow duplicate output commitments.

Proposal

Introduce consensus rules governing precedence, i.e. precisely which instance of a set of duplicates gets spent first.
Spending the oldest first would appear reasonable, given multiple outputs to choose from.
We would still want to apply cut-through within a transaction or within a block before applying this new rule.

  • cut-through still applies - duplicate output within transaction or block is "spent" first
  • spend oldest first
  • account for coinbase maturity - plain output may take precedence over older yet still immature coinbase
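The precedence rules above could be sketched roughly as follows. This is a minimal illustration with stand-in types, not Grin's actual `OutputFeatures` or index code; the maturity constant assumes Grin's one-day-of-blocks coinbase maturity.

```rust
/// Sketch only: stand-in types, not Grin's actual core types.
#[derive(Clone, Copy)]
pub enum OutputFeatures {
    Plain,
    Coinbase,
}

#[derive(Clone, Copy)]
pub struct Instance {
    pub mmr_pos: u64, // MMR insertion order: lower pos == older
    pub height: u64,  // block height at which the output was created
    pub features: OutputFeatures,
}

// Assumption: one day of blocks, matching Grin's coinbase maturity rule.
const COINBASE_MATURITY: u64 = 1_440;

/// Choose which duplicate instance to spend at the given chain height:
/// oldest first, skipping coinbase instances that are not yet mature.
pub fn spendable_instance(dups: &[Instance], height: u64) -> Option<Instance> {
    dups.iter()
        .filter(|i| match i.features {
            OutputFeatures::Plain => true,
            OutputFeatures::Coinbase => height >= i.height + COINBASE_MATURITY,
        })
        .min_by_key(|i| i.mmr_pos)
        .copied()
}
```

Note how the plain output takes precedence over an older but still immature coinbase, as described above.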

We currently maintain an output_pos index in the database mapping output commitment to unique MMR pos, along with the associated block height.

This index would be reworked to map output commitments to a list of MMR positions.
In the common case this list will only contain a single MMR pos.
This list will behave as a stack.

In cases where duplicate commitments exist we need to handle the following situations -

  • add new duplicate output instance to index
  • rewind (remove) recent duplicate instance
  • spend oldest duplicate instance
  • spend something other than oldest duplicate instance (plain vs immature coinbase)
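As a rough sketch of these index operations, with an in-memory map standing in for the LMDB-backed index and strings standing in for Pedersen commitments:

```rust
use std::collections::HashMap;

/// Sketch of the reworked output_pos index: each commitment maps to the
/// MMR positions of its unspent instances, ordered oldest-first.
/// (In-memory HashMap and string keys are illustrative assumptions;
/// Grin's index is LMDB-backed and keyed by Pedersen commitment.)
#[derive(Default)]
pub struct OutputPosIndex {
    index: HashMap<String, Vec<u64>>,
}

impl OutputPosIndex {
    /// Add a new (possibly duplicate) output instance.
    pub fn push(&mut self, commit: &str, pos: u64) {
        self.index.entry(commit.to_string()).or_default().push(pos);
    }

    /// Rewind: remove the most recently added instance (stack pop).
    pub fn rewind(&mut self, commit: &str) -> Option<u64> {
        let v = self.index.get_mut(commit)?;
        let pos = v.pop();
        if v.is_empty() {
            self.index.remove(commit);
        }
        pos
    }

    /// Spend the oldest instance (front of the list).
    pub fn spend_oldest(&mut self, commit: &str) -> Option<u64> {
        let v = self.index.get_mut(commit)?;
        if v.is_empty() {
            return None;
        }
        let pos = v.remove(0);
        if v.is_empty() {
            self.index.remove(commit);
        }
        Some(pos)
    }
}
```

In the common case the inner list holds a single position and every operation touches only that one entry; the "spend something other than oldest" case would need the maturity check sketched earlier layered on top.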

We also need to take care to ensure this works in an adversarial environment. We may only normally see a few duplicate outputs and for these only a few duplicate instances. But we need to robustly handle a scenario where we have large numbers of duplicates, themselves with potentially large numbers of instances.
Simply storing duplicates naively as serialized vecs in the db will not be sufficient.

Proposal is to maintain a "doubly-linked list" in the database for each output commitment.
In the simple case the "head" and "tail" will simply be the single instance.
In the duplicate case the "head" will point to the most recent instance, with the "tail" pointing to the oldest instance. Each instance will have references to the "next" and "prev" instance, ordered by age (MMR insertion order).

Not exhaustive but a rough idea of what is involved to maintain this -

  • Adding a new output instance will involve inserting a new entry in the linked list, updating the head reference and the prev reference of the previous head.
  • Removing (rewinding) will involve deleting an entry and updating the head reference.
  • Spending the oldest instance will involve deleting an entry and updating the tail reference.

This should easily handle adversarial numbers of outputs as we only ever operate on the ends of the list structure.
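A sketch of that linked-list maintenance for a single commitment, with the database modeled as an in-memory map (the names here are illustrative, not Grin's actual store API):

```rust
use std::collections::HashMap;

/// Links stored per instance, ordered by age (MMR insertion order).
#[derive(Clone, Copy)]
struct ListEntry {
    prev: Option<u64>, // next-oldest instance
    next: Option<u64>, // next-newest instance
}

/// Sketch of the proposed per-commitment doubly-linked list.
/// In the simple (non-duplicate) case head == tail == the single instance.
#[derive(Default)]
pub struct DupList {
    entries: HashMap<u64, ListEntry>, // mmr_pos -> links
    head: Option<u64>,                // most recent instance
    tail: Option<u64>,                // oldest instance
}

impl DupList {
    /// Adding a new instance: insert at the head and relink the old head.
    pub fn push_head(&mut self, pos: u64) {
        let entry = ListEntry { prev: self.head, next: None };
        match self.head {
            Some(h) => self.entries.get_mut(&h).unwrap().next = Some(pos),
            None => self.tail = Some(pos),
        }
        self.entries.insert(pos, entry);
        self.head = Some(pos);
    }

    /// Rewinding: delete the head entry and move the head reference back.
    pub fn pop_head(&mut self) -> Option<u64> {
        let pos = self.head?;
        let entry = self.entries.remove(&pos).unwrap();
        self.head = entry.prev;
        match self.head {
            Some(p) => self.entries.get_mut(&p).unwrap().next = None,
            None => self.tail = None,
        }
        Some(pos)
    }

    /// Spending the oldest instance: delete the tail entry and move it forward.
    pub fn pop_tail(&mut self) -> Option<u64> {
        let pos = self.tail?;
        let entry = self.entries.remove(&pos).unwrap();
        self.tail = entry.next;
        match self.tail {
            Some(p) => self.entries.get_mut(&p).unwrap().prev = None,
            None => self.head = None,
        }
        Some(pos)
    }
}
```

Every operation reads or writes a constant number of entries regardless of how many duplicate instances exist, which is the adversarial-robustness property claimed above.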

Rollout

I propose we target 4.0.0 with the consensus breaking changes.
We can implement the index and necessary data structures in advance of this - supporting a flexible index while still preventing duplicates at the consensus level.
We can then make the necessary consensus rule changes as required.
We would need to take care to ensure the current consensus rules remain unchanged until then, rejecting transactions and blocks as necessary if duplicate output commitments are present.

antiochp added the "consensus breaking" label on Mar 17, 2020
@tromp
Contributor

tromp commented Apr 15, 2020

Implementation-wise it might be more efficient to have two separate indices from output commitment to output mmr index: the current one for unique outputs (UO) and a separate one for duplicate outputs (DO). DO requires more space per entry, but is expected to be much smaller in size. The downside is some more case analysis is required when adding or spending outputs.

@antiochp
Member Author

I'd like to revisit the assumption that we want to -

  • spend oldest first

With cut-through in a block (or aggregated transaction) we implicitly spend newest first.
So we may want to apply this consistently, always spending newest first, effectively treating a commitment's unspent outputs as a "stack" and popping the latest one off to spend it.

This would maximize the age of unspent outputs (oldest would be spent last) in the UTXO set in the presence of duplicates.
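A minimal sketch of this newest-first alternative, where a commitment's duplicate instances form a plain stack of MMR positions (`DupStack` is an illustrative name, not an existing Grin type):

```rust
/// Sketch: newest-first (LIFO) precedence treats a commitment's unspent
/// duplicates as a stack of MMR positions, matching the spend order that
/// cut-through already implies within a block.
pub struct DupStack(Vec<u64>);

impl DupStack {
    pub fn new() -> Self {
        DupStack(Vec::new())
    }

    /// Instances are pushed in MMR insertion order (oldest at the bottom).
    pub fn add(&mut self, pos: u64) {
        self.0.push(pos);
    }

    /// Spending (and rewinding) both pop the newest instance,
    /// so the oldest instance survives longest in the UTXO set.
    pub fn spend(&mut self) -> Option<u64> {
        self.0.pop()
    }
}
```

A single stack covers both spend and rewind, which is simpler than the oldest-first scheme that needs operations on both ends of the list.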

@antiochp
Member Author

antiochp commented Jun 27, 2020

If I have multiple duplicate instances of an output in the current UTXO set it is not necessarily safe to only spend some of them. The remaining instances would be at risk of transaction "replay" in the simple case.

So presumably we would want the ability to spend multiple or all duplicate instances in a single transaction.
To do this we would need to handle the case of duplicate inputs in a transaction or block.

@antiochp
Member Author

antiochp commented Jun 27, 2020

If we support duplicate outputs then we can have the following -

  • coinbase output, commitment C
  • plain output, commitment C

These are duplicates in the sense that they share the same commitment, but they differ in their features.

When we spend an output (via a transaction input) we actually specify both the commitment and the features.

i.e. spending "coinbase, commitment C" is different to spending "plain, commitment C".

This has always struck me as a bit weird.

It's another aspect that we need to think through in the context of duplicate outputs.


We also need to consider how the precedence rules and different output features will interact.
If a coinbase output and a plain output are created with duplicate commitments, which gets spent first?
What happens if the coinbase output would otherwise take precedence but has not yet matured?

There are some potentially nasty edge cases to consider here.
We do not want to iterate sequentially over 1,000 immature coinbase outputs (unlikely, but possible) to identify the spendable plain output. The index should handle this efficiently.

@tromp
Contributor

tromp commented Jun 27, 2020

When we spend an output (via a transaction input) we actually specify both the commitment and the features.

That's odd. I never realized that Input specifies features.
Where is the code that verifies if a transaction is valid? E.g. as verified before relay?

@antiochp
Member Author

That's odd. I never realized that Input specifies features.

pub struct Input {
    /// The features of the output being spent.
    /// We will check maturity for coinbase output.
    pub features: OutputFeatures,
    /// The commit referencing the output being spent.
    #[serde(
        serialize_with = "secp_ser::as_hex",
        deserialize_with = "secp_ser::commitment_from_hex"
    )]
    pub commit: Commitment,
}

I think this decision was originally made so we could hash the input and check that it matches the hash in the output MMR (the features are needed to compute that hash), avoiding a lookup of the output itself from the MMR. But this is a weak argument for doing it this way.

Where is the code that verifies if a transaction is valid? E.g. as verified before relay?

Basically this all lives in the txpool impl.
If we successfully add the transaction to the txpool then it's valid and we can relay it.

@tromp
Contributor

tromp commented Jun 28, 2020

I had also assumed that if we allow duplicates, then the spending order should be Last-In First-Out,
implemented with a map from each duplicate output mmr position to the next-lowest duplicate position, similar to what I proposed for recent NRD kernels.
That map would be very sparse in practice.
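A sketch of that sparse-map idea (the names are hypothetical; the actual NRD kernel index in Grin differs in its details):

```rust
use std::collections::HashMap;

/// Sketch: a sparse map from each duplicate instance's MMR position to the
/// next-lowest position holding the same commitment. Unique outputs never
/// appear here, so the map stays very small in practice.
#[derive(Default)]
pub struct DupChain {
    next_lowest: HashMap<u64, u64>,
}

impl DupChain {
    /// Record that the instance at `pos` duplicates an earlier one at `prev_pos`.
    pub fn link(&mut self, pos: u64, prev_pos: u64) {
        self.next_lowest.insert(pos, prev_pos);
    }

    /// With LIFO precedence, spending the newest instance at `pos` exposes
    /// the next-lowest duplicate (if any) as the new spendable instance.
    pub fn spend(&mut self, pos: u64) -> Option<u64> {
        self.next_lowest.remove(&pos)
    }
}
```

This splits the work between the existing unique-output index (UO) and a small duplicate-only structure (DO), at the cost of the extra case analysis mentioned above.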

@antiochp
Member Author

antiochp commented Jul 7, 2020

I believe this will be a consensus rule that will not be verifiable prior to the fast sync horizon.
If we have two duplicate outputs A and A' and one is spent prior to the horizon then fast sync nodes will be unable to verify that the last-in-first-out precedence rule was followed.

This would appear to be similar to the coinbase maturity rule - historical validators are unable to validate these rules were followed for data pre-horizon.

@tromp
Contributor

tromp commented Jul 7, 2020

We must remind ourselves of the unusual consensus model of Grin.
A valid history is a sequence of headers and a UTXO set at the horizon (whose kernels are consistent with all header kernel counts and kernel MMR roots) followed by a week of valid blocks.
There may not have been any pre-horizon blocks and no historical validators.
In case there were, they indeed checked more things, including rangeproofs of spent outputs, duplicate spending order (with dupes allowed), existence of dupes (when not allowed), coinbase maturity, block weight limits, and possibly others. But given that there may not have been blocks in the first place, none of this should discourage one from making consensus-breaking changes.

@antiochp
Member Author

antiochp commented Jul 7, 2020

But given that there may not have been blocks in the first place, none of this should discourage one from making consensus-breaking changes.

👍

@antiochp
Member Author

Taking a step back and flipping this around - are there benefits to maintaining the current restriction where we prevent duplicate unspent outputs?

One tx can prevent another tx from being accepted if they share a common output - are there scenarios where this is beneficial?

@tromp
Contributor

tromp commented Jul 16, 2020

are there scenarios where this is beneficial?

Certainly not in the case where the output in question is 1-of-1, since its owner can create and spend that output at will.

So it would be some n-of-n, and the n parties pre-constructed two or more txs all spending to this same output. I still don't see any possible use of this...

@antiochp
Member Author

Linking these here for reference -

https://gist.github.com/phyro/7d054c5431376c0fdaafc88a1d0e023a

https://forum.grin.mw/t/payjoins-for-replay-protection/7544/8

"Payjoins for replay protection" relies on "anchor" outputs and assumes "duplicate outputs" are not allowed.

@antiochp
Member Author

antiochp commented Oct 8, 2020

Consensus here is we want to retain current behavior and disallow duplicate outputs.
Closing this issue.

@antiochp antiochp closed this as completed Oct 8, 2020