
CIP-0118? | Validation Zones #862

Open
wants to merge 35 commits into base: master

Conversation


@polinavino polinavino commented Jul 23, 2024

We propose a set of changes that revolve around validation zones, a construct for allowing certain kinds of underspecified transactions. In particular, for the Babel-fees use case we discuss here, we allow transactions that specify part of a swap request. A validation zone is a list of transactions such that earlier transactions in the list may be underspecified, while later transactions must complete all partial specifications. In the Babel-fees use case, the completion of a specification is the fulfillment of a swap request. We discuss how validation zones for the Babel-fees use case can be generalized to a template for addressing a number of use cases from CPS-15.


📄 Rendered Proposal

@polinavino polinavino changed the title Validation Zones CIP-0118 | Validation Zones Jul 23, 2024
@rphair rphair changed the title CIP-0118 | Validation Zones CIP-0118? | Validation Zones Jul 23, 2024
@rphair
Collaborator

rphair commented Jul 23, 2024

thanks @polinavino ... really happy to see the continuation of this work. I'm marking the title with the obligatory ? because until merged the number (or its assignment at all) cannot be certain. Also I'm marking the prior version Likely Deprecated and will close as such with your confirmation:

cc (for continuing review from the old proposal) @Quantumplation @fallen-icarus - p.s. cc (re: Rationale ["towards better design"]) @AndrewWestberg


@rphair rphair left a comment


I believe it is proper for this to be a separate PR from the first one, though that should be confirmed by other CIP editors today (https://hackmd.io/@cip-editors/93 - cc @Ryun1 @Crypto2099). Given the significance of the revision, I think the commit history in this case is more important than the discussion history, and hopefully any prior discussion points will be summarised by previous reviewers here (@fallen-icarus @Quantumplation).

CIP-0118/README.md (resolved review thread)
CIP-0118/README.md (outdated review thread)

## Path to Active

### Software Readiness Level
Collaborator


For consistency with other CIPs (mainly for review in parallel with 100+ others) this section needs to be broken into Acceptance and Implementation ... I guess since it refers to testing functionality then it would be on the Implementation path.

When done sifting material around in this section it will also help for these items to be GitHub formatted tickboxes (- [ ]) but I am not going to be pedantic about it especially at this stage.

Author


Sounds good, will do that later today!

@rphair rphair added the Category: Ledger label (Proposals belonging to the 'Ledger' category) Jul 23, 2024
CIP-0118/README.md (three outdated review threads)
@polinavino
Author

@fallen-icarus @lehins @disassembler
Some thoughts on running the Babel service alongside your mempool (as a distributed service), rather than as a centralized service:

let us assume that

  • with the right careful design, we can make sure that the phase-2 work of that service (and not just the work of the mempool on a complete zone) can always be compensated with the collateral mechanism (e.g. each transaction in a zone has enough collateral to cover preceding ones)

then, the risk of "having unfulfilled exchange offers hang around forever" becomes significantly reduced by being able to set your own custom Babel offers filter that only keeps certain offers around:

  • individual SPOs can have a custom filter on what partially balanced things their Babel service accepts without even validating them, e.g. they only accept offers of tokens they themselves want to trade, and are able to fulfill the exchange offer themselves
  • larger operations such as exchanges (that can afford to absorb the cost of more transactions waiting around) can have filters that accept only certain kinds of token trade offers, based on market price and popularity (as they expect the popular ones to eventually be accepted, but can throw them away after a certain amount of time anyway)

so, a Babel service discards offers they're not interested in, and also phase-1 invalid offers. Phase-2 invalid offers can go on-chain immediately for collateral collection, without the need to be fulfilled. This feels like a decent compromise and addresses multiple use cases.
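The filtering policy described above could be modeled roughly like this. This is a hypothetical sketch only: `Offer`, its fields, and the decision labels are illustrative stand-ins, not part of any Cardano node or ledger API.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """A partially balanced Babel offer (illustrative model)."""
    offered_token: str   # token the sender gives up
    wanted_token: str    # token the sender asks for
    phase1_valid: bool   # result of cheap phase-1 checks
    phase2_valid: bool   # result of script (phase-2) execution

def spo_filter(offer: Offer, tokens_i_trade: set) -> str:
    """An SPO-style filter that only keeps offers in tokens it trades itself."""
    if not offer.phase1_valid:
        return "discard"                # phase-1 invalid: drop silently
    if not offer.phase2_valid:
        return "submit-for-collateral"  # phase-2 invalid: on-chain for collateral collection
    if offer.wanted_token in tokens_i_trade:
        return "keep"                   # this SPO can fulfill the exchange itself
    return "discard"                    # not interested in this trade
```

An exchange-style service would swap in a different predicate (market price, popularity, expiry), which is exactly the per-operator configurability the comment argues for.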

@fallen-icarus

fallen-icarus commented Aug 13, 2024

I don't think this design addresses what I consider to be the biggest issue: there is a fundamental mismatch of incentives between service providers and end-users. What is good for the babel service is bad for the end-user, and vice versa.

A distributed system with badly aligned incentives can still result in a centralized ecosystem. Ethereum's staking situation is a real-world example of this. Ethereum requires users to lock up ETH, but no one actually wants to do this. So companies like Lido were created to fill the incentives gap, and as a result, Ethereum is still centralized despite using a distributed network.

I think this design will end up the same way and it is the assumption itself that is the problem. Specifically, this part: each transaction in a zone has enough collateral to cover preceding ones.

so, a Babel service discards offers they're not interested in, and also phase-1 invalid offers. Phase-2 invalid offers can go on-chain immediately for collateral collection, without the need to be fulfilled.

This is assuming end-users will actually submit "offers" to the babel network that satisfy the collateral requirement, and I think that is a completely unrealistic assumption. The collateral requirement is not possible for end-users to satisfy without coordination. Since this is supposed to be a distributed/global network, coordination among end-users is totally impractical.

What I think will likely happen instead is Company A will be created to fill the incentives gap (just like Lido). End-users will create accounts with Company A, and Company A will use its centralized servers to coordinate the collateral among users and then submit the zone to Cardano directly, totally bypassing the distributed babel service. (Why would Company A submit the transactions to the babel network when it has already coordinated the zone?) So instead of this:

flowchart LR
    Alice --> BabelNetwork
    Bob --> BabelNetwork
    Charlie --> BabelNetwork
    BabelNetwork --> Cardano

we effectively have this:

flowchart LR
    Alice --> CompanyA
    Bob --> CompanyA
    Charlie --> CompanyA
    CompanyA --> Cardano

The ecosystem ends up in the exact same scenario as with transaction swaps except it is actually worse for three reasons:

  • This design added extra complexity to Cardano that goes mostly unused. It may still be used for offers without scripts, but it is unusable for offers that do contain scripts.
  • Company A will actually create the transactions on the users' behalf and ask the users to sign them (how else can the collateral amounts be coordinated?). Transaction swaps trivially enable all users to create the transactions themselves, not just transactions without scripts. Therefore, this design actually requires users to trust Company A more than they would have to trust companies built around transaction swaps.
  • Requiring users to coordinate collateral creates a stronger incentive for centralization than just balancing transactions against each other. This means there will be fewer companies acting as intermediaries with this design than with transaction swaps. For example, with transaction swaps, Alice can send the same swap transaction to multiple different aggregators. That is not possible with this design because the collateral requirement forces each transaction to be context dependent.

@polinavino
Author

polinavino commented Aug 14, 2024

This is assuming end-users will actually submit "offers" to the babel network that satisfy the collateral requirement, and I think that is a completely unrealistic assumption. The collateral requirement is not possible for end-users to satisfy without coordination. Since this is supposed to be a distributed/global network, coordination among end-users is totally impractical.

The (forthcoming "nested transactions") design will operate in a slightly different way, somewhere between validation zones and swaps. Instead of a zone, there will be a (fully phase-1 valid) top-level transaction that contains a list of transactions (which take the place of swaps). This is how I picture a Babel service working in that case:

  1. users will submit transactions (that are not necessarily balanced, do not necessarily pay enough fee, and do not necessarily cover required collateral - this is up to the builder). This submission could be either to a specific babel fees service, or possibly to some kind of network that propagates them to multiple babel fee services (this is up to the folks building the off-chain stuff).
  2. anyone running a babel fees service that receives such a transaction is able to configure their software to either accept or reject transactions based on any property they choose, including collateral payment, fee payment, running certain scripts, exchanging certain tokens, etc. Let us assume some Babel service configuration lets transaction tx through
  3. the babel service can either
  • if they want to balance the transaction, they would build a top-level transaction tx' that covers the collateral and contains tx, and submit tx' to the Cardano network
  • if they only want to exchange some of the offer, they would add another transaction tx1, and send the list [tx ; tx1] onwards to other Babel services that might want to build a top-level transaction with [tx ; tx1]

In step 3, the service has a choice of whether to cover the collateral of tx1 to increase their chances of another service wanting to validate [tx ; tx1], or cover just their own collateral, or cover no collateral at all. They can also decide how much fees they want to pay.

The idea here is that the very minimum requirement is that when a top-level transaction is sent across the Cardano network, it has to have enough collateral. Any other constraints on which non-top-level transactions Babel fee services accept are up to the Babel fee service developers.
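The nested-transaction shape and the minimum collateral requirement described above could be sketched like this. This is a toy model under stated assumptions: `SubTx`, `TopLevelTx`, and the single `script_cost` number are illustrative simplifications, not the actual ledger types or the real collateral calculation.

```python
from dataclasses import dataclass, field

@dataclass
class SubTx:
    """An unbalanced sub-transaction (illustrative; fields simplified)."""
    fee_paid: int = 0
    collateral: int = 0    # collateral this sub-tx chooses to contribute
    script_cost: int = 0   # collateral needed to cover its own scripts

@dataclass
class TopLevelTx:
    """A fully phase-1 valid top-level transaction wrapping sub-txs."""
    collateral: int
    sub_txs: list = field(default_factory=list)

def enough_collateral(top: TopLevelTx) -> bool:
    """Phase-1 style check: the whole bundle must cover all script costs.

    Sub-txs may contribute nothing; the top-level builder can make up
    the difference, matching the 'cover none, some, or all' choice above.
    """
    needed = sum(s.script_cost for s in top.sub_txs)
    provided = top.collateral + sum(s.collateral for s in top.sub_txs)
    return provided >= needed
```

The point of the model is that only the aggregate check matters on-chain; how the collateral is split between the service and the sub-tx builders is a private negotiation.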

The complexity here is reduced by making swaps full transactions -

  • these transactions can have empty fields for everything they don't need (including collateral) so that they end up being the same size as swaps, but building them is exactly the same as building transactions, with a single CLI
  • we retain nice properties of the ledger that existing software may rely on, including that the txID of a txIn refers to the transaction that was signed by its constructor, and that the signed transaction body is the pre-image of the txID. Not retaining one of those properties will disrupt the operation of existing scripts!
  • the ledger rules process only transactions
  • Plutus script logic is simpler in this design, including local predictability of the ExUnits needed, and the ability to predict whether it's even possible to make a Plutus contract validate within a given transaction

@fallen-icarus

fallen-icarus commented Aug 15, 2024

  • if they only want to exchange some of the offer, they would add another transaction tx1, and send the list [tx ; tx1] onwards to other Babel services that might want to build a top-level transaction with [tx ; tx1]

Will the list [tx ; tx1] be signed? What is preventing a man-in-the-middle from making changes to the list, or another batcher from just using the transactions separately?

The complexity here is reduced by making swaps full transactions - ...

  • Plutus script logic is simpler in this design including local predictability of ExUnits needed, and the ability to predict if it's even possible to make a Plutus contract validate within a given transaction

I don't know enough about the ledger to comment on the ledger-related points, but I disagree with the plutus script comment. I think writing smart contracts for either transaction swaps or validation zones would be very easy (it ultimately depends on the new script context), but I would actually prefer if smart contracts always saw the whole transaction because it would enable more use cases.

Perhaps I am missing some low-level nuance, but I don't think there is a need to keep ExUnits constant between local execution and full scope execution. AFAIU the point of the ex units is to sign off on how much you are willing to pay in fees for your scripts. But in this context, even without the ex units, I can still sign off on how much I am willing to pay in fees by controlling how many assets are unbalanced in my transaction piece. I am still incentivized not to waste ex units since the babel service is incentivized to fit as many orders in a transaction as possible; they would just omit my order if I use too many ex units.

Depending on what the new script context will be, it should be very easy to write a smart contract where the ex units between the local execution and full scope execution vary only slightly. For example, if the smart contract is shown the entire transaction (ie, not just its own piece), the ledger can tell the smart contract being executed which internal transaction it is from (using an index for location in the internal transactions list). Then the smart contract can just focus on that piece. The only variability would be how many steps it takes to get to the required internal transaction. But constant-time lookups would eliminate this variability entirely.
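The constant-time lookup suggested above could look like this. Everything here is hypothetical: the real Plutus `ScriptContext` for nested transactions does not exist yet, so `own_subtx_index` and `sub_txs` are made-up field names standing in for whatever the ledger would actually provide.

```python
def own_subtx(script_context: dict) -> dict:
    """Constant-time lookup of the sub-transaction a script belongs to.

    `script_context` is an illustrative stand-in for a hypothetical
    nested-transaction script context: the ledger supplies the index of
    the currently executing piece, so the script jumps straight to it
    instead of scanning the whole list (the variability the comment
    wants to eliminate).
    """
    i = script_context["own_subtx_index"]  # ledger-supplied position
    return script_context["sub_txs"][i]    # O(1) list indexing
```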

Also, the eUTxO model guarantees high assurances for smart contracts being executed locally to have the same result in the full scope. Breaking the transaction into pieces doesn't change this. I think it would be very easy for me to update all of my DApps to execute in a piece and know with 100% certainty that it will have the same result in the full scope context (there would be slight variability in ex units as described above).

The only smart contracts that may have a different result are those that actually care about the full transaction. But if they do care about the full transaction, why would you expect the local result to be the same as the full scope result? If this kind of smart contract actually did behave the same, it would be a defective smart contract. Besides, I think this variability is actually fine:

  1. Account style DApps are already using batchers. These batchers can easily double as babel service providers and execute the account style DApp in the top-level transaction to finish balancing the internal transactions.
  2. There are possible use cases for having smart contracts see the full transaction scope even when executed in a piece. For example, Alice may not want her piece batched in a transaction where X happens. Therefore, she can create a smart contract that fails if it is executed in the same transaction as X. She can submit this smart contract with her piece to the babel service. So her smart contract may succeed locally but fail when included with a batch, but that is the whole point. This feature enables Alice to have a say in how her piece is used, even after giving her piece to the babel service. This feature isn't possible with the previous requiredTxs design since Alice doesn't know the other transactions; she just wants her smart contract to check any and all other transactions in the batch.

The role token example you mentioned before is just a result of smart contract devs taking a (valid) shortcut based on the assumption that the smart contract will always be executed against a fully balanced transaction. That shortcut obviously can't be used for unbalanced transactions, but it may not even be necessary since the inputs and outputs would be fractured across the top-level transaction and internal transactions. Instead of having to look through an aggregate list of outputs with 20 entries, it can look through both the inputs and outputs from that specific scope. This scope may only have 2 entries in each list. This actually saves ex units despite now looking through both inputs and outputs of a specific scope. Therefore, proper usage of role tokens is actually very easy and cheap to guarantee, even with transaction swaps.
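The scope-local role-token check described above, scanning only a piece's own few inputs and outputs rather than an aggregate list of twenty, might be sketched like this. The `scope` structure and `tokens` field are illustrative assumptions, not a real script-context shape.

```python
def role_token_present(scope: dict, policy_id: str) -> bool:
    """Check one scope (a single transaction piece) for a role token.

    `scope` is a hypothetical view holding only this piece's inputs and
    outputs; each entry carries the set of token policy IDs it holds.
    Scanning both short lists is still cheaper than scanning the
    aggregate outputs of a fully batched transaction.
    """
    entries = scope["inputs"] + scope["outputs"]  # e.g. 2 + 2 entries
    return any(policy_id in e["tokens"] for e in entries)
```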

@polinavino
Author

polinavino commented Aug 15, 2024

Will the list [tx ; tx1] be signed? What is preventing a man-in-the-middle from making changes to the list, or another batcher from just using the transactions separately?

nothing - this is no different from either zones or swaps. In any of the three designs, transactions can be transmitted in whatever format off-chain, really. Lists are one option for combining them into a single data structure for transmission. Anyone making a block can always build a new top-level transaction (or last zone transaction) that contains whatever other swaps/transactions they want. They just have to provide enough collateral for all transactions. Anyone checking a block will (in phase-1 checks) see whether enough collateral has been provided by the top-level tx.

The only smart contracts that may have a different result are those that actually care about the full transaction. But if they do care about the full transaction, why would you expect the local result to be the same as the full scope result? If this kind of smart contract actually did behave the same, it would be a defective smart contract.

Showing scripts within a swap the full transaction will never be advantageous over running scripts in the full sub-transactions, because:

  • if the contract does not care about the full transaction: no difference in this case, except that the nested-tx design can potentially save on ex-units
  • if the contract cares about the full transaction: the builder of the top-level tx needs to inspect the script to see how the transaction they are building will affect the outcome. The swap builder will have to additionally transmit info like "to make my scripts validate, you need to add an output to the top-level tx that spends a specific input", whereas a sub-tx can inspect existing transactions and condition on those without expecting the top-level tx builder to figure out how to make the script validate
  • since the ex-units required are not known ahead of time for scripts in your swap, you have no way of knowing how much you owe the top-level tx builder in script fees, and therefore cannot build a swap that is guaranteed to pay them correctly. You have to trust them to "give you change", maybe? Nested txs allow sub-tx builders to pay for their script fees (or at least to perform an exchange for some tokens that have sufficient value to cover the script fees), and they also allow the top-level builder to cover the fees.

Note that for swaps, allowing top-level transaction to affect script semantics has the effect that now they are not just the problem of the programmer or the swap-builder. Script semantics become also the problem of the builder of the top-level tx, since they have to either figure out what the script expects from their top-level transaction construction in a (hopefully) automated way, or figure out that the script can never be satisfied by any top-level transaction.

@fallen-icarus

Note that for swaps, allowing top-level transaction to affect script semantics has the effect that now they are not just the problem of the programmer or the swap-builder. Script semantics become also the problem of the builder of the top-level tx, since they have to either figure out what the script expects from their top-level transaction construction in a (hopefully) automated way, or figure out that the script can never be satisfied by any top-level transaction.

I don't think this is an insurmountable obstacle. Here are a few options I would personally try:

  • The tx builder can supply the (open-source) scripts for users to choose from. Effectively, these scripts act as trustlessly enforced settings for how the tx builder can use your piece. Since the tx builder is the one that created the scripts, they already know what the scripts expect.
  • The tx builder can have an automated "dialogue" with the user's smart contract. Smart contracts allow users to supply their own error/trace messages. The tx builder can require these error messages to follow some standardized formatting/syntax. Then, when the tx builder validates the transaction locally, it can adapt the transaction based on the error messages given by the scripts. This process can be fully automated and doesn't require any extra information from the user. For example, if Alice does not want script X to be executed, when her smart contract sees script X, it can throw the error not script <scriptX_hash>. This message can be easily parsed and understood by a machine. In other words, the error messages themselves become a kind of simple programming language. Any pieces with smart contracts that do not follow the standardized formatting/syntax can be automatically dropped. The tx builder can demand extra in fees for this feature. I would personally be willing to pay an additional 1-2 ADA in fees (paid in another token) if I needed finer control over how my piece is used.
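The automated "dialogue" proposed in the second bullet could be sketched as a tiny parser over standardized trace messages. This is purely hypothetical: the `not script <hash>` syntax is the commenter's own example, and the grammar, function, and drop policy here are illustrative, not any existing Plutus or node feature.

```python
import re

# Hypothetical standardized trace-message grammar: "not script <hash>"
NOT_SCRIPT = re.compile(r"^not script ([0-9a-f]+)$")

def adapt_or_drop(trace_messages, planned_scripts):
    """Parse a failing piece's trace messages and adapt the draft tx.

    Returns the subset of planned scripts the builder must remove to
    satisfy the piece, or None if any message does not follow the
    agreed syntax (in which case the piece is automatically dropped).
    """
    to_remove = set()
    for msg in trace_messages:
        m = NOT_SCRIPT.match(msg)
        if m is None:
            return None          # non-conforming piece: drop it
        to_remove.add(m.group(1))
    return planned_scripts & to_remove
```

The "error messages as a simple programming language" idea is exactly this: a machine-readable vocabulary the builder can act on without any out-of-band communication from the user.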

Showing scripts within a swap the full transaction will never be advantageous over running scripts in the full sub-transactions

I think this assertion is too strong. You are effectively arguing there is no use case for logical nots with transaction pieces. In other words, you can't make your piece conditional on something else not happening. Claiming there is no use case for logical nots with transaction pieces seems too strong a statement to be true.

Blockchain isn't just meant for distributed use cases, it is also meant to minimize how much you need to trust other people. Consider business partners that are jointly protecting some assets using a multisig. Right now, one of the partners needs to create the entire transaction and send it around to all of the other partners to sign. With transaction pieces, it becomes possible to build up the final transaction like a negotiation. Negotiations use logical nots all the time. Since these are business partners who likely talk directly, it would be trivial to convey the smart contract requirements since the point is just to trustlessly enforce the negotiations. In fact, the smart contract requirements are likely part of the negotiations. While general multisig can be used by business partners, I think it would be better if there was a method that more accurately models what happens in the real world. Transaction pieces with logical nots are exactly that; Alice can trivially make her signature conditional on certain conditions happening and other conditions not happening.

@polinavino
Author

polinavino commented Aug 16, 2024

Could you provide an example of the use of this not in a negotiation?

Also, could you suggest a use case in which one would require a script that

  • conditions on a transaction that it has never seen (and not just on the missing/extra value), and
  • cannot be fooled by splitting the unseen transaction into two steps tx1 ; tx2 (and showing only the latter to the swap),

and what would that condition be?

@fallen-icarus

fallen-icarus commented Aug 16, 2024

Businesses are not allowed to mix business expenses and personal expenses. Doing so has huge legal consequences. When you are in business with a partner, only one of you needs to mess up to put all parties at risk. Even if you didn't know about your partner's misuse of funds, you will still be held accountable.

In the business multisig example using an unbalanced piece, if I sign off on spending the jointly controlled funds and my partner uses his share of the funds for a personal expense in the same transaction, regulators can easily view this as the business paying a personal expense since that is what it looks like on the blockchain. I would then also be held liable for my partner's misuse of funds.

It would be better for me to include a smart contract in my piece that will fail unless my partner sends his share of the funds to one of his pre-approved personal addresses (or pre-approved business expenses/donations). If the funds went to my partner's personal address and then he spent the funds from this address in a separate transaction, there would be a verifiable trail that the business paid my partner his salary and then he used his income on whatever he wanted. There is zero ambiguity on whether the funds were misused. This is effectively a negotiation where I agree to release the funds as long as my partner does not use his share of the funds for personal expenses in the same transaction.

This eliminates the need for me to have to trust my partner. If I just left my transaction unbalanced (ie, it is missing my partner's share in the outputs), I still have to trust my partner won't use his share for personal expenses in the same transaction. Splitting the transaction into two steps (using validation zones) has the same limitation. In a business setting, I need to be able to put constraints on how the extra funds are used. The funds used for a personal expense cannot come from a business UTxO.

Again, I recognize this can be done with a general multisig, but this approach seems more natural and flexible. For example, imagine if I signed the full transaction, but now a change needs to be made for my partner's output. My signature is now invalid since the transaction is completely different; I need to sign the new transaction. If we used pieces instead, only my partner's portion needs to be updated and resigned. My piece is still valid and can already be used with my partner's new portion; he doesn't need to come back to me to approve the change to his output. In other words, the pieces approach makes it easier for businesses to make last minute changes to how funds are spent.


I think you are missing the point of my argument, though. Can you definitively say there isn't a single valid use case for logical nots with transaction pieces? Even if you or I can't come up with a use case, can you definitively say no one else in the world will have a valid use case for it either? I cannot, which is why I don't feel comfortable choosing an option that potentially closes off valid use cases. Considering how similar the two approaches are, I would rather accept some slight drawbacks (within reason) to get the extra expressiveness that could potentially create an even more vibrant DeFi economy on Cardano.

@polinavino
Author

polinavino commented Aug 16, 2024

It would be better for me to include a smart contract in my piece that will fail unless my partner sends his share of the funds to one of his pre-approved personal addresses (or pre-approved business expenses/donations).

Note here that along with the smart contract that checks this, you have to include a specification (in some kind of language - maybe just the Plutus contract itself, or maybe something else) of the properties of other swaps that your contract should be compatible with. Then, the aggregator (top-level tx builder) will have to check every swap that comes in to see if it meets this specification of another swap(s) by running the Plutus contracts in every incoming swap (or checking them with whatever other language the specification is written in), rather than getting the full data needed for processing your sub-tx in one shot, and they have to do this for every arbitrary combination of swaps. This compounds the problem that with swaps, there isn't even the possibility of DDoS protection via collateral, and anyone can get the aggregator to run code that the sender themselves has never tried executing on the relevant input.

This scheme also does not reduce communication (since your partner has to send a transaction to you in the case of nested transactions, and to the aggregator in the case of swaps). If you yourself are the aggregator (which is likely the cheapest option for dealing with both swaps and nested txs), the two schemes are equivalent. Then, of course, there is the question of fees - they may change depending on other swaps, and that requires either trust or additional communication, such as requiring your partner to commit to paying for more ex-units. If you, as the aggregator, are paying for their scripts, you still need their full transaction/swap.

Not knowing/having assurance of the outcome/cost of script validation sucks for both the aggregator and the sender of swaps. I do not believe that this closes off use cases; it does, however, operate on a slightly different protocol with less potential for things to go wrong.

@polinavino
Author

polinavino commented Aug 16, 2024

I will make the nested-txs CIP in such a way that you can toggle between your thing being a swap or a full tx. The difference will be whether

  • ex-units are used as specified in the subTx or in the top-level tx, and
  • you want all sub-txs to be shown to all other sub- and top-level txs, or only requiredTxs,

and you can set a flag to toggle. Then, in either case, we iterate over the list of topLevelTx :: subTxs as processing transactions (but assembled slightly differently). The subTxs already do not require the fields that are missing in swaps.
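The two per-transaction toggles described above could be modeled as flags in the sub-transaction body. All names here are illustrative guesses at a design that is still being drafted, not fields of any existing CDDL or ledger type.

```python
from dataclasses import dataclass

@dataclass
class SubTxBody:
    """Illustrative sub-transaction body carrying the two proposed toggles."""
    use_toplevel_exunits: bool   # ExUnits taken from the top-level tx, not this body
    visible_to_all_subtxs: bool  # scripts see all sub-txs, or only requiredTxs

def exunits_source(body: SubTxBody) -> str:
    """Where ledger rules would read ExUnits from for this sub-tx."""
    return "top-level" if body.use_toplevel_exunits else "sub-tx"
```

Setting `use_toplevel_exunits` trades local predictability of script cost for letting the aggregator absorb it, which is the swap-like end of the toggle.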

@fallen-icarus

The part you quoted isn't meant to be used in a distributed scenario. There's no aggregator and there are no swaps from strangers in that scenario. It was an example of a non-distributed use case that would benefit from a logical not with pieces. It is just a more complicated/flexible multisig between parties running a company together. I can easily just convey the smart contract behavior to my business partner over the phone. Again, blockchain can also be used to minimize the trust required in interactions between parties; it isn't only for distributed interactions between strangers.

there isn't even the possibility of DDoS protection via collateral, and anyone can get the aggregator to run some code that the sender themselves has never tried executing on relevant input.

Centralized services don't need on-chain protection from DDoS. They can use api keys. When a user signs up to use my service, I would give them an api key (this can be hidden from the user so they don't even know they are using one). This approach is better than using IP addresses or specific credentials from the transaction. With api keys, I can easily rate limit the number of pieces they can send to me, and easily punish them if they misbehave (by temporarily blocking more pieces from them).

For the scenario where users include their own scripts, I would allow only one (maybe two) restrictions per piece. Disallowing a specific script to be executed should cover most use cases and be very cheap to check. If a specific user tries using too many restrictions, I can immediately drop their piece and punish them through the api key. The smart contract error message programming language is entirely up to me as the service provider.

If there actually is an issue with wasting a lot of energy running bad transactions that get dropped, charging a subscription for the service can easily help cover the cost. Don't forget that what is expensive to a smart contract is dirt cheap to a typical laptop. My business would have a stronger computer than a laptop which means it would take a lot for my servers to feel the wasted transaction validations.
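The per-key rate limiting described above can be sketched quite simply. This is a hypothetical illustration (the `RateLimiter` class and its fields are not from any real aggregator implementation): a sliding-window counter that drops pieces once a key exceeds its quota.

```python
# Hypothetical sketch of per-API-key rate limiting for submitted "pieces".
# A key may submit at most `limit` pieces per `window` seconds; anything
# beyond that is dropped (and the key could additionally be penalized).
import time
from collections import defaultdict

class RateLimiter:
    def __init__(self, limit, window):
        self.limit = limit      # max pieces per window
        self.window = window    # window length in seconds
        self.history = defaultdict(list)  # api_key -> submission timestamps

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        # keep only submissions still inside the window
        recent = [t for t in self.history[api_key] if now - t < self.window]
        self.history[api_key] = recent
        if len(recent) >= self.limit:
            return False  # drop this piece
        recent.append(now)
        return True
```

A blocked key simply has its pieces refused until the window rolls over, which matches the "temporarily blocking more pieces from them" idea above.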

Ideally, there would be both a distributed network and centralized service providers. The distributed network can just have the network disallow custom scripts. Meanwhile, if users want to still use custom scripts, they have the choice of using a centralized service provider. I definitely think centralized services can make use of the logical nots, even when aggregating orders from complete strangers. So I don't think we should disallow it just because a distributed babel network may not be able to use it.

@fallen-icarus
Copy link

  • if you want all sub-txs to be shown to all other sub and top-level txs, or only requiredTxs
    and you can set a flag to toggle.

Who toggles this? If it must be toggled at the top-level, then the aggregator can maliciously not toggle it. Also, specifying required transactions by name won't work since the point is to have the logical nots work on any and all transactions.

@polinavino
Copy link
Author

polinavino commented Aug 16, 2024

Who toggles this? If it must be toggled at the top-level, then the aggregator can maliciously not toggle it. Also, specifying required transactions by name won't work since the point is to have the logical nots work on any and all transactions.

it's set by the body of every transaction (top-level or not), so the ledger code will add the units provided by the aggregator if that is what you want (i.e. you set the flag). This setting also allows the scripts in your transaction to see all other subTxs instead of only the ones you have specified as required, so you do not have to receive them in advance of constructing your own transaction; in case of failure, however, only the top-level tx's collateral will be collected
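The visibility rule being discussed can be sketched as follows. Field names (`see_all`, `required_txs`) are illustrative stand-ins for whatever the ledger representation would be: with the flag set, a transaction's scripts see every other transaction in the batch; with it unset, they see only the explicitly required ones.

```python
# Hedged sketch of the per-transaction visibility flag discussed above.
# Field names are hypothetical, not the real ledger encoding.
def visible_txs(tx, batch):
    """Return the transactions in `batch` that `tx`'s scripts may inspect."""
    others = [t for t in batch if t["id"] != tx["id"]]
    if tx.get("see_all", False):
        # flag set: scripts see all other sub- and top-level txs
        return others
    # flag unset: scripts see only the explicitly required transactions
    required = tx.get("required_txs", set())
    return [t for t in others if t["id"] in required]
```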

@rphair rphair added the State: Confirmed Candiate with CIP number (new PR) or update under review. label Aug 20, 2024
@polinavino
Copy link
Author

polinavino commented Aug 22, 2024

The new README-nested.md is a first draft of Nested Transactions.
It is based on (hopefully) the best of #880 and previous versions of Validation Zones.

  • No changes to block structure are required,
  • Instead of a zone, we have a top-level transaction that contains sub-transactions (1 level deep only)
  • An example of an additional kind of intent this design allows is given, which can be used to implement light client protocols (this is a demo of a potential approach to CPS-0015? | Intents for Cardano #779 )
  • It is possible (but not mandatory) for transactions to see all other transactions in the batch

@rphair @Quantumplation @fallen-icarus @WhatisRT
@lehins since this is heavily inspired by Swaps and VZ criticism, would you like to be a coauthor?

There are still a couple of TODO's, and a Haskell prototype (built on the real ledger codebase) will be available shortly.

@Quantumplation
Copy link
Contributor

I'm happy to weigh in on the proposal, but I definitely can't commit to being a coauthor 😅

@polinavino
Copy link
Author

polinavino commented Aug 26, 2024

This new design makes it possible to define a script which (via conditioning on transactions fixed by requiredTx), checks that a specific other script (hash) must be run by another transaction in the same batch, or that payments made by other transactions satisfy some property.

Such conditioning can be done in two ways

  • directly specifying requiredTxs for a given transaction, and also
  • (as requested) by allowing your transaction to see all others in the batch and condition on those. This second option requires you to not provide your own ExUnits data, similar to the CIP-0131? | Transaction swaps #880 design, since you cannot locally check this without knowing all other transactions in the batch (if you could, you would be back to option 1)

@polinavino
Copy link
Author

After some discussion with @willjgould and @lehins, we have come up with a way for sub-transactions to constrain the batches they are in: batch observers (see the updated README-nested.md). This is very similar to script observers, and both can (will?) be implemented together; however:

  • batch observers can be required by any transaction (using requireBatchObservers), but run only by a top-level transaction (it runs all the ones required by itself and its sub-txs)
  • batch observers get special TxInfo versions that contain otherInfos : List TxInfo , specifying the data of all sub-transactions
  • the TxInfo for all other script purposes does not contain otherInfos
  • new script purpose BatchObs Ix

It is essentially impossible to run (calculate exUnits, etc.) batch observer scripts unless you are building the top-level tx, so it probably makes sense for off-chain code to give special instructions to the top-level tx builder in some high-level language about how to build it in a way that will satisfy the script (simpler than Plutus).
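Putting the bullet points above together, the batch-observer flow can be sketched like this. All names are illustrative (a real `TxInfo` is much richer): the top-level transaction collects every observer required by itself or any sub-transaction, and runs each one against a `TxInfo` extended with `other_infos`, the data of all sub-transactions.

```python
# Hedged sketch of the batch-observer flow described above.
# `observers` maps an observer hash to a script: tx_info -> bool.
def run_batch_observers(top_tx, sub_txs, observers):
    # the top-level tx runs all observers required by itself and its sub-txs
    required = set(top_tx.get("require_batch_observers", []))
    for sub in sub_txs:
        required |= set(sub.get("require_batch_observers", []))
    # only batch observers get the otherInfos field in their TxInfo
    tx_info = {
        "tx": top_tx["id"],
        "other_infos": [s["info"] for s in sub_txs],
    }
    # the batch is acceptable only if every required observer accepts it
    return all(observers[h](tx_info) for h in required)
```

This also shows why the exUnits point holds: an observer's input includes every sub-transaction, so only the top-level builder can evaluate it.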

Comment on lines +63 to +67
5. a new intent we call "spend-by-output", wherein a sub-transaction may specify _outputs_ it
intends to spend, and the top-level transaction specifies the inputs that point
to those outputs in the UTxO set. This is included as a way to showcase that this
change to the ledger rules establishes the infrastructure to add additional
kinds of supported intents (see CPS-15).

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I don't understand what this means. The use of the terms "inputs" and "outputs" is confusing to me. In my vocabulary, "inputs" are UTxOs that a transaction intends to spend while "outputs" are UTxOs that would be created by the transaction. So I don't understand what it means to "specify outputs it intends to spend" or "[specify] the inputs that point to those outputs in the UTxO set". Can you explain what you mean by this?

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

My interpretation is that "output" is the contents (address/datum/value), while "input" is the out_ref pointing to a specific output.

  1. a new intent we call "spend-by-output", wherein a sub-transaction may specify the contents of the outputs it intends to spend, and the top-level transaction specifies the transaction references to those outputs in the UTxO set

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suppose tx has inputs txin1, txin2 and output o1

also, in the UTxO we have

txin1 |-> o
txin2 |-> o'

then, we can build tx' that has spendOuts containing o, o' and output o1. A top-level transaction builder can then complete the batch with txTop, with (possibly) some inputs of its own, and

corInputs containing txin1, txin2

Then, in some sense, tx and the pair tx'; txTop do a similar thing: remove the entries at txin1, txin2 from the UTxO, and add o1 to the UTxO. tx' would likely include a payment to whoever builds txTop for their tx-building services.

The reason we are proposing this is that if a light client (LC) is not following the chain, they would be unaware of the txins they would want to spend. If an LC requests a service provider (SP) to build a transaction for them that satisfies some specification (e.g. "pay key k the amount x of ada from key k'"), the SP should be able to respond in such a way that the LC cannot use the data in this response to construct a new valid transaction that excludes payment to the SP. Obscuring the inputs that the SP can supply later makes it so that the transaction is not valid unless it comes with a top-level tx that provides the missing inputs.
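The matching step the top-level builder performs can be sketched as follows. Field names (`spend_outs`, `cor_inputs`) follow the discussion above but are hypothetical: every output a sub-transaction promises to spend must be covered by a distinct top-level input whose UTxO entry holds exactly that output.

```python
# Hedged sketch of resolving "spend-by-output" intents, per the example above.
# `spend_outs`: output contents a sub-tx intends to spend (e.g. [o, o']);
# `cor_inputs`: txins supplied by the top-level tx (e.g. [txin1, txin2]);
# `utxo`: the UTxO map from txin to output contents.
def resolves_spend_outs(spend_outs, cor_inputs, utxo):
    available = [utxo[i] for i in cor_inputs if i in utxo]
    for out in spend_outs:
        if out in available:
            available.remove(out)  # each supplied input covers one intent
        else:
            return False  # an intended output is not backed by any input
    return True
```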

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This design answers the question of "how can a user approve payment from their wallet without knowing anything about the chain except what the transaction spending their money is doing with their money"?

Comment on lines +429 to +435
### Open (atomic) swaps

A user wants to swap 10 Ada for 5 tokens `myT`. He creates an unbalanced transaction `tx` that
has extra 10 Ada, but is short 5 `myT`.
Any counterparty that sees this transaction can create a top-level
transaction `tx'` that includes `tx` as a sub-transaction. The transaction
`tx'` would have extra 5 `myT`, and be short 10 Ada.

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Can you possibly be a bit more explicit with this example? This CIP talks about underspecified transactions, but doesn't really give a user-friendly example of one. I believe this new version has adopted the approach from my Transaction Pieces, right? So it would be accurate to say:

  • tx has inputs with 10 Ada and an output with 5 myT, and is left unbalanced like this.
  • tx' has inputs with 5 myT and an output with 10 Ada.
  • They are submitted together as a balanced batch with tx' as the top-level transaction.

I think I am biased by the original Validation Zones CIP that had unresolved holes, so I would personally find this example helpful to show there are no unresolved holes in this version.
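The balanced-batch property in those bullets can be made concrete with a small sketch (values simplified to `{asset: amount}` dicts; fees and minting are ignored for clarity): each transaction may have a nonzero value imbalance, but the imbalances across the whole batch must cancel out.

```python
# Illustrative check of the "balanced batch" property from the swap example.
from collections import Counter

def imbalance(tx):
    """Value consumed minus value produced by a single transaction."""
    bal = Counter()
    for asset, amt in tx["inputs"].items():
        bal[asset] += amt
    for asset, amt in tx["outputs"].items():
        bal[asset] -= amt
    return bal

def batch_balanced(txs):
    """The batch is valid only if all per-tx imbalances sum to zero."""
    total = Counter()
    for tx in txs:
        total.update(imbalance(tx))  # Counter.update adds (signed) counts
    return all(v == 0 for v in total.values())
```

So `tx` alone (10 Ada extra, 5 myT short) fails the check, while the batch of `tx` together with `tx'` passes — there are no unresolved holes left once the batch is assembled.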

Copy link
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

yes, this is exactly right.

@michele-nuzzi
Copy link

Hey everyone.

I didn't have a clear idea on this CIP for a while, but recently I've been thinking that it doesn't bring a lot of value, especially considering the cost of implementation (impact on the ledger, etc.).

Most of what validation zones could achieve could instead be achieved by leveraging UTxO contention.

We can have multiple transactions spending different UTxOs of a contract, but the same UTxOs of a given user.
All these transactions are mutually exclusive, and the outcome is nondeterministic until one of them makes it into a block.

If someone is worried about contention on the contract, the more UTxOs the contract can spend (and the user can use in mutually exclusive transactions), the lower the chance of contention on the contract.

This effectively moves the load from the ledger to the consensus, which in theory should already be able to handle it, without a hardfork.
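The mutual-exclusion mechanism being proposed can be sketched as follows (hypothetical structures, not real node code): candidate transactions share a user UTxO, so once one is applied, the shared input is spent and every competing candidate becomes invalid.

```python
# Hedged sketch of the UTxO-contention alternative described above.
def conflicting(tx_a, tx_b):
    """Two transactions conflict if they spend any common input."""
    return bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))

def apply_first_valid(candidates, unspent):
    """Apply the first candidate whose inputs are all still unspent.
    Returns (applied tx id or None, remaining unspent inputs)."""
    for tx in candidates:
        if set(tx["inputs"]) <= unspent:
            return tx["id"], unspent - set(tx["inputs"])
    return None, unspent
```

This illustrates both the appeal (no ledger change) and the cost: which candidate lands is decided by block inclusion order, not by the transactions themselves.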

@michele-nuzzi
Copy link

michele-nuzzi commented Sep 26, 2024

BTW, I see and understand how validation zones could turn out to be useful, and I'm not entirely opposed to them.

Just wanted to let you know of potential alternative possibilities.

Labels
Category: Ledger Proposals belonging to the 'Ledger' category. State: Confirmed Candiate with CIP number (new PR) or update under review.