
CPS-???? | Mutable shared state and atomicity are compatible #874

Open · klntsky wants to merge 13 commits into master

Conversation

@klntsky klntsky commented Aug 5, 2024

Full determinism of transactions, defined by the absence of shared mutable state, gives rise to valuable ledger properties, such as:

  • atomicity (a transaction is either fully accepted or fully rejected without a fee)
  • predictability of monetary changes and fees

However, it comes at a cost:

  • The need to "re-introduce" non-determinism where the dApp logic requires it (commonly by splitting a single user action across multiple transactions) makes immediate execution of user actions impossible.
  • UTxO contention limits the number of concurrent users a dApp can have, because a single UTxO can be consumed by a single transaction only.

Completely avoiding mutable shared state may be a suboptimal way to achieve the desirable properties, as the design space for alternatives has not been fully explored.

(Rendered version)

@rphair rphair added the Category: Ledger Proposals belonging to the 'Ledger' category. label Aug 6, 2024
@rphair rphair left a comment


@klntsky from my level this seems like it has material in common with Intents / Validation Zones and the current discussion behind the latter CIP (please correct me if I'm wrong): #862

@fallen-icarus maybe you could look at this from the other direction & let us know if this CPS relates to indefinite / incompletely specified transactions as per your concurrent posting: #873

I've added it for introduction at tomorrow's regular CIP meeting (where we can brainstorm a more concise title): https://hackmd.io/@cip-editors/94

@michele-nuzzi

I'm sorry if I sound harsh, but I don't believe this CPS should be discussed.

Development becomes more complicated because of the need to "re-invent" non-determinism where it is required for the dApp logic

I'm not sure I can think of a use case where non-determinism is required in principle.

Current designs that fall back to such "reinvented non-determinism" are the result of a previously immature ecosystem (Plutus V1 plus a lack of specific knowledge of eUTxO) having to "borrow" non-deterministic designs from other ecosystems.

But in principle, non-determinism is NOT a requirement.

Yes, determinism may require us to think a bit harder because there were no established patterns, but as the ecosystem evolves we are coming up with more and more eUTxO-friendly deterministic patterns.

And in doing so we preserve a silver bullet in terms of the security of the protocols running on this deterministic design.

Security that other ecosystems cannot achieve as easily.

UTxO contention limits the number of concurrent users a dApp can have

Again, this is mostly true only because the "established" designs so far tend to "borrow" designs NOT INTENDED for UTxO, paying the price of applying a synchronous model where we should instead be thinking in a parallel model.

Many examples of contracts able to handle concurrent users exist; the most naive I can think of is the NFT marketplace contract.


@fallen-icarus fallen-icarus left a comment


I have to agree with @michele-nuzzi (although I'm still open to discussing it).

I'm not sure I can think of a use case where non-determinism is required in principle.

I've built a DEX, lending/borrowing protocol, options trading protocol, and aftermarket protocol. This is a full DeFi stack, and not once did I wish for non-determinism.

I think all of the problems you've mentioned are actually due to most DApp developers still not knowing how to properly use the eUTxO model. This isn't something that can be figured out in a year or two. I literally just opened a CIP that shows the eUTxO model actually enables securely breaking apart transactions; doing so would trivially enable babel fees and high-frequency off-chain trading. I didn't realize it was possible until only recently. Perhaps this CIP can actually help solve your batcher problems?

Every DApp developer I've seen argue "we need non-determinism" is also doing something that I think could be better done another way. Is the problem that we don't have non-determinism? Or is the problem that they are misusing the eUTxO model? My experience and understanding make me lean very heavily towards the latter.

Personally, I wouldn't consider sacrificing any determinism for another few years. I really don't think it has been given a fair chance yet. If we can figure it out, determinism has way better security guarantees than non-determinism.

However, it comes at a cost:

- Development becomes more complicated because of the need to "re-invent" non-determinism where it is required for the dApp logic
- UTxO contention limits the number of concurrent users a dApp can have

@fallen-icarus fallen-icarus Aug 6, 2024


This is not a universal truth. It is only true for concentrated DApps (e.g., liquidity-pool-based DApps) which are not taking advantage of the eUTxO model. Each seller can have their own UTxO, which makes concurrency as high as the underlying market (trying to increase it beyond this can lead to economic instability).

I haven't touched this CIP in a while, but distributed DApps don't have this problem. Distributed DApps may have a lower throughput per transaction, but since these transactions can be submitted in parallel, they can easily have a higher throughput per block than concentrated DApps. (I'm not a software engineer so there is likely still a lot of room for improvement with distributed DApp throughput.)

@klntsky klntsky (Contributor Author)

Each seller can have their own UTxO which makes concurrency as high as the underlying market

The problem is that each buyer will try to use the order with the best price available. No matter how many UTxOs you create in an order book, it will be a probabilistic game without batchers, though it doesn't have to be in principle. By parallelizing you can only sidestep the problem, not solve it, which is not really viable, as EVM-based AMMs can simply provide better guarantees: they can offer immediate execution no matter the number of users. Aiming at anything less than that means losing, because the quality standards are not being set by you.


## Use cases

### Automated market maker DEX


I am against AMM DEXs as the dominant DEX architecture:

  • They directly undermine the security assumptions of Proof-of-Stake since users are forced to give up most, if not all, delegation control, voting control, and custody of their assets
  • They do not allow users to specify their own prices which makes them extremely economically inefficient
  • They are fundamentally centralized since updating/maintaining the liquidity pools is controlled by nothing more than a multisig; governance actions are not trustlessly enforced

Using AMMs as the foundation of a DeFi economy (especially one secured and governed by PoS) is a really bad idea. I honestly struggle to even call this an opinion, given how strongly I believe it.

The problem is not the lack of immediate execution. We actually don't need immediate execution; we only need "fast enough" execution, which future improvements to Cardano are likely to provide. IMO the main reason Cardano's DeFi has not really taken off is that most DApps are still using liquidity pools for everything, and a significant number of people do not like the downsides I mentioned above. They are still waiting on the sidelines.

(Apologies, but this topic really triggers me 😅)

@klntsky klntsky (Contributor Author)

They directly undermine the security assumptions of Proof-of-Stake since users are forced to give up most, if not all, delegation control, voting control, and custody of their assets

It's possible to use one's own staking credential with the dApp's script as the payment credential.

They do not allow users to specify their own prices which makes them extremely economically inefficient

Existing AMM DEXes allow placing orders as a side feature. An order book existing alongside an AMM is exactly how Uniswap does it.

They are fundamentally centralized since updating/maintaining the liquidity pools is controlled by nothing more than a multisig; governance actions are not trustlessly enforced

Any governance scheme can be attached to any protocol.

Liquidity pools without an update mechanism are possible. An order book that lets the admins do something with the orders is also possible.

@klntsky klntsky (Contributor Author)

Anyway, an AMM DEX is just an example here. An order book suffers from the same problem: an order can be matched by only one counter-party.


klntsky commented Aug 6, 2024

@fallen-icarus

Perhaps this CIP can actually help solve your batcher problems?

That CIP does not address UTxO contention: even though transactions can be assembled piecewise independently, the UTxOs they ultimately have to consume can still each be consumed by only a single transaction. There can only be as many swaps as there are UTxOs, while mutable shared state would allow as many simultaneous swaps as the settlement layer allows.


klntsky commented Aug 6, 2024

@michele-nuzzi

Current designs that fall back to such "reinvented non-determinism" are the result of a previously immature ecosystem (plutus v1 + lack of specific knowledge on eUTxO), having to "borrow" non deterministic designs coming from other ecosystems.

The ledger changes since Plutus V1 have not reduced our reliance on batchers. Do you have a counter-example?

But in principle, non determinism is NOT a requirement.

Type-II non-determinism is in the product requirements of the core DeFi primitives: AMM DEXes, lending/borrowing, liquidations, etc. These are the cases where the user shouldn't, and can't possibly, know the outcome of their action within a dApp.


klntsky commented Aug 6, 2024

@michele-nuzzi

And in doing so we are preserving a silver bullet in terms of security of the protocols running on this deterministic design.

What particular aspects of security do you have in mind? I can show an example of how determinism affects security in quite a catastrophic way (this will be the topic of my next CPS). The DAO hack mentioned in the eUTxO paper does not count: it's obvious that mutable shared state was not the culprit; it was their particular API design choices around it.

Many examples of contracts being able to handle concurrent users are present, the most naive I can think of is the NFT marketplace contract.

The fact that the ledger design is good enough for something does not mean that it is good enough for everything in that category.

Your argument can be rephrased as "immediate execution in the presence of dApp-layer non-determinism is not a valid use case", right?

@rphair rphair changed the title CPS-???? | Full determinism of transactions is unnecessarily restrictive for DeFi CPS-???? | Relaxed Transaction Determinism Aug 6, 2024
- introduce an excerpt from the paper
- add a note about collateral loss to the AMM DEX example
@rphair rphair left a comment


@klntsky I've retitled the PR based on the words I recalled people using in the debate of this issue in the last hour's meeting. It's vital that we keep CIP titles not only concise (the original title would have been the longest) but free from bias: especially since the confining effect on Cardano DeFi has not been objectively established yet.

As I said at the meeting I'm happy to see this discussion as a counterpoint to the proposed design patterns that would offer transaction flexibility:

... without sacrificing Cardano's "unique selling proposition" of determinism. I would recommend promoting this to a CIP candidate if & when a use is documented that cannot be satisfied by a fully deterministic design pattern.

@klntsky klntsky changed the title CPS-???? | Relaxed Transaction Determinism CPS-???? | Mutable shared state and atomicity are compatible Aug 7, 2024

## Problem

It is impossible to build a dApp that has these three properties at the same time on a fully-deterministic ledger:

I think the key to scaling solutions on Cardano without compromising its security and determinism lies in building layered solutions. Bridges and other scaling tech can help us batch transactions, reduce load on the main chain, and improve overall performance without giving up ledger determinism. In my view this is not a problem but a feature of Cardano, and we need to find ways to adapt our solutions toward that goal.


zliu41 commented Aug 21, 2024

cc @lehins

@rphair rphair added the State: Unconfirmed Triaged at meeting but not confirmed (or assigned CIP number) yet. label Aug 21, 2024

colll78 commented Aug 22, 2024

In general I agree that our ecosystem would benefit greatly from some global-state mechanism; after all, the whole original pitch of Cardano was to be a hybrid ledger (this is why we already have global state in the form of our staking/rewards system). I just think the approach suggested in this CPS is wrong.

Instead, we should expand the existing account system (reward accounts) with accumulators, which provide the ability to store and manipulate data-encoded types. Then we can simply extend the existing interface for interacting with the account portion of our ledger, i.e. we have:

```haskell
TxCertRegStaking V2.Credential (Haskell.Maybe V2.Lovelace)
```

We can add:

```haskell
TxCertRegAccumulator V2.ScriptCredential (Haskell.Maybe V2.Lovelace)
```

Where V2.ScriptCredential is a script credential derived from the script that manages the accumulator i.e.:

```haskell
-- | Simple accumulator merging a new value into a data-encoded integer:
-- decode both integers, add them, and re-encode the result.
accumulatorScript :: BuiltinData -> BuiltinData -> BuiltinData
accumulatorScript acc new =
  BI.mkI (BI.addInteger (BI.unsafeDataAsI acc) (BI.unsafeDataAsI new))
```

Then actions can be processed on the accumulator via:

```haskell
TxAccumulatorProcess V2.ScriptCredential BuiltinData
```

Where BuiltinData is the new value to be processed by the accumulator.

The last piece required is TxAccumulatorRead V2.ScriptCredential, which simply provides the accumulator value to the script context as an entry in a map (in a new field, e.g. accumulatorReads :: Map ScriptCredential BuiltinData).

I think the above covers everything this CPS seeks to achieve without introducing additional complexity to the UTxO portion of the ledger (we isolate this non-deterministic computation in the already non-deterministic account portion of the ledger). Likewise, we can use the accumulatorScript as a phase-1 validation check to avoid script failure onchain: if the accumulatorScript associated with A :: ScriptCredential does not error when processed in the block, then a transaction that contains TxAccumulatorRead A should succeed. By using the accumulatorScript executed with TxAccumulatorProcess as the phase-1 guard script, we preserve the deterministic script evaluation property for all other script executions and thus guarantee that collateral does not need to be consumed unnecessarily.
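A minimal sketch of the intended semantics, assuming the hypothetical interface above: applying a block's TxAccumulatorProcess actions is just a left fold of the accumulator script over the submitted payloads.

```haskell
import PlutusTx.Builtins (BuiltinData)

-- Sketch only, not ledger code: fold the accumulator script over the
-- TxAccumulatorProcess payloads of a block to obtain the new state.
applyAccumulator
  :: (BuiltinData -> BuiltinData -> BuiltinData) -- accumulator script
  -> BuiltinData                                 -- state before the block
  -> [BuiltinData]                               -- payloads, in block order
  -> BuiltinData                                 -- state after the block
applyAccumulator accScript = foldl accScript
```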

The detail is in the implementation. In the case of Cardano, that state is explicitly passed around from one transaction to another, while Ethereum actually performs the global mutation.
In other words, this is Cardano:

```haskell
foldl xor False [True, False, True, False]
```

while Ethereum is:

```python
state = False
for v in [True, False, True, False]:
    state = xor(v, state)
```
  • What properties of the ledger that stem from determinism are really valuable?

One powerful property is that it enables a large number of expensive operations to be done entirely offchain. For instance, linear search should never be performed in onchain code under any circumstances, because it throws away what is perhaps the biggest advantage that Cardano's smart contract platform has over those in other ecosystems: the deterministic script evaluation property. We made huge design sacrifices to obtain this property, so not taking advantage of it would frankly be like running a marathon with ankle weights.

By taking advantage of deterministic script evaluation, we never have to perform search onchain: because we know what the list looks like at the time of transaction construction, we can pass in the index where the item we are looking for should be, and fail if it is not there. We can look anything up in O(1) checks/boolean conditions by providing the onchain code with the index (via the redeemer) of the element we want to find, and simply erroring if the indexed element is not what we are looking for. The fact that any element can be found onchain without linear search is an extremely powerful property of our smart contract platform that simply doesn't exist outside our ecosystem.

This doesn't just apply to search. One demonstration of its strength is that for any problem in NP (Nondeterministic Polynomial time), we can solve it in P onchain by calculating the solution offchain, providing it via the redeemer, and simply verifying its correctness onchain. This is only possible because of the deterministic script evaluation property, which guarantees that all inputs to a script are known at transaction construction. It is not possible in systems with non-deterministic script evaluation, because the solution provided during transaction construction may be invalidated by the time the transaction is processed in a block.
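As a hedged illustration of this pattern (the names are mine, not from the thread): the index is computed offchain and passed via the redeemer, and the onchain code only verifies the element found at that position.

```haskell
import PlutusTx.Prelude

-- Walk to a pre-computed index, failing if it is out of range. The exact
-- cost is known at transaction-construction time, since the list is known.
elemAt :: Integer -> [a] -> a
elemAt _ []       = traceError "index out of range"
elemAt n (x : xs) = if n == 0 then x else elemAt (n - 1) xs

-- Instead of a linear search for `expected`, the offchain code finds it,
-- passes its index in the redeemer, and the script does one comparison.
checkAtIndex :: Eq a => a -> Integer -> [a] -> Bool
checkAtIndex expected ix xs = elemAt ix xs == expected
```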

Another obvious benefit is the application of zero-knowledge proofs, especially zk scaling. For instance, we can currently transform any smart contract f :: ScriptContext -> BuiltinUnit into an equivalent zero-knowledge version with unbounded ExUnits and script size (the complexity of the contract doesn't affect fees or ExUnits). This is possible because, when building a transaction, we can construct a proof that we successfully executed smart contract f on the transaction's script context A, since the deterministic script evaluation property guarantees that A is fixed and known. The actual smart contract that we interact with onchain is simply a verifier that accepts a proof as input and returns true if the proof verifies and false otherwise. If we introduce non-determinism, then during transaction construction it is no longer possible to construct a proof that we successfully executed smart contract f on script context A, because at that time the values of A are unknown due to the non-determinism.
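Schematically (a hedged sketch only: Proof and verifyProof are hypothetical stand-ins for a concrete SNARK backend, not an existing API):

```haskell
import PlutusTx.Prelude

-- Hypothetical stand-ins for a real proof system.
newtype Proof = Proof BuiltinByteString

-- Stub verifier: a real implementation would check a SNARK proof that
-- `f` succeeded on the fixed, known script context behind the commitment.
verifyProof :: BuiltinByteString -> Proof -> Bool
verifyProof _commitment _proof = True

-- The onchain script is only the verifier, so the complexity of the
-- original contract `f` no longer affects onchain fees or ExUnits.
zkWrapped :: BuiltinByteString -> Proof -> Bool
zkWrapped ctxCommitment proof = verifyProof ctxCommitment proof
```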

  • Is it possible to extend the current ledger with mutable shared state?

Yes, in fact we already have a form of mutable shared state. See global-state (although this global state will only become meaningful to Plutus scripts when Conway II is released, which allows for staking scripts that require script execution for registration; in Conway I, execution of the associated staking script is optional for registration).

  • How to process rollbacks efficiently in the presence of mutable shared state?

In the model proposed above, it would be handled the same way that we currently handle mutable shared state (reward account registration / deregistration).


fallen-icarus commented Aug 26, 2024

What about how this impacts Ouroboros Leios? According to the original paper, it depends on the assumption that the majority of transactions are independent of each other.

For general purpose ledgers where we can assume most in-flight transactions are independent ... With this approach, conflicting transactions will reduce the effective throughput, because they get processed only to be later discarded. However if the rate of conflicts is not too high then this is a reasonable trade-off.
-- Section 3.2.3

It also requires that checking for conflicts must be possible without having to run the actual smart contracts:

Importantly, to preserve parallelism, the detection and discarding of conflicting transactions must be cheap relative to the cost of executing the transactions (e.g. running scripts and checking cryptographic signatures)
-- Section 3.2.3

Section 3.2 of the paper concludes with this:

Fortunately UTxO-style ledgers are well placed to address these issues. This is due to the fact that transactions in UTxO ledgers explicitly identify all of their inputs and outputs up front, and those dependencies are complete. They are complete in the sense that there are no other ‘side effects’ of the transaction other than the explicitly identified inputs and outputs. This makes it straightforward to identify transactions that conflict with each other. It also means that – provided the dependencies between transactions are respected – the transactions can be reordered without changing the results. This makes it possible to do the serialisation procedure outlined above correctly, and to do so relatively cheaply.
-- Section 3.2.4

While technically the rewards are a form of global state, they do not create any dependencies between transactions, since the only possible thing you can do is withdraw the balance. Currently, smart contracts can't even check the available balance. AFAIU, the changes being proposed would make it possible for a large number of transactions to become dependent on each other, in the sense that the order of execution matters more and determining whether the specified order is actually valid requires running the smart contracts. In other words, it would result in the assumptions underlying Leios being consistently violated.


colll78 commented Aug 26, 2024

@fallen-icarus
While technically the rewards are a form of global state, they do not create any dependencies between transactions since the only possible thing you can do is withdraw the balance.

This isn't quite correct. In addition to withdrawals, registration and deregistration also act on global state, and unlike withdrawals they can indeed create dependencies between transactions. For instance, consider the following:

```haskell
dexOrderCS :: CurrencySymbol
dexOrderCS = "deadbeef"

dexOrderTN :: TokenName
dexOrderTN = "OrderToken"

-- Assume the DEX order minting policy enforces that this script executes
-- (i.e. by checking that a redeemer with the matching credential is present
-- in txInfoRedeemers). Each order placement toggles the credential between
-- registered and deregistered.
globalStateCredential :: ScriptContext -> BuiltinUnit
globalStateCredential ctx =
  case scriptContextScriptInfo ctx of
    CertifyingScript _ (TxCertRegDRep _ _)   -> check dexOrderTokenMinted
    CertifyingScript _ (TxCertUpdateDRep _)  -> BI.unitval
    CertifyingScript _ (TxCertUnRegDRep _ _) -> check dexOrderTokenMinted
    _ -> traceError "unexpected script purpose"
  where
    txInfo = scriptContextTxInfo ctx
    dexOrderTokenMinted = valueOf (txInfoMint txInfo) dexOrderCS dexOrderTN == 1

-- | The DRepCredential derived from the above script
globalStateDRepCred :: DRepCredential
globalStateDRepCred = ...

-- Minting policy that succeeds if and only if the total number of DEX orders
-- placed is odd, i.e. the credential is currently registered (a
-- TxCertUpdateDRep cert passes phase-1 only for a registered credential).
readGlobalState :: ScriptContext -> BuiltinUnit
readGlobalState ctx =
  case scriptContextScriptInfo ctx of
    Minting _ ->
      if head certs == TxCertUpdateDRep globalStateDRepCred
        then BI.unitval
        else traceError "Even number of DEX orders"
    _ -> traceError "unexpected script purpose"
  where
    txInfo = scriptContextTxInfo ctx
    certs = txInfoTxCerts txInfo
```

The success or failure of the readGlobalState minting policy depends on the order in which transactions are processed in the block.

Sure, smart contract execution isn't needed to determine validity, but smart contract execution itself isn't the issue that Leios is concerned with. Instead, the important part is:

the detection and discarding of conflicting transactions must be cheap relative to the cost of executing the transactions

What this means in the context of our architecture is that detection of conflicting transactions must be done cheaply in phase-1 validation. This property is preserved simply by imposing an extremely restrictive ex-unit budget on accumulator scripts.


fallen-icarus commented Aug 27, 2024

@colll78 The even-or-odd example seems too contrived to be a real use case. Why would a DEX care whether the total number of transactions in a block is even or odd? Are there any DApps that actually care about this?

Still, perhaps my wording could have been better since I concede it is technically possible to create dependencies using certificates right now. However, I don't think my conclusion is wrong since doing so doesn't seem useful (isn't the lack of usefulness the point of this CPS?). Even if there are some use cases for it, I think the overwhelming majority of transactions will not bother using certificates like this due to it being useless in most contexts. So IMO the assumption that most transactions will not depend on each other is still a safe assumption right now.

What this means in the context of our architecture is that detection of conflicting transactions must be done cheaply in Phase-1 validation. This property is preserved simply by imposing an extremely restrictive ex-unit budget on accumulatorScripts.

You definitely know way more about software engineering than I do, but AFAIU I am skeptical that restricting the ex-units is enough. Currently, isn't it enough to just check whether the input still exists, which is O(1)? There is no need to do anything else aside from looking up the input in the UTxO set (I am assuming this is a map). But if you now need to check the accumulator smart contracts, you also need to deserialize the smart contracts before you can even run them. Wasn't this the problem with the reference-script DDoS you thwarted (by de-registering the staking credentials)? I think someone from the consensus team should weigh in on this, but I'm not sure who to tag.

I don't mean to come across as universally against mutable state; I just want to make sure that including it on L1 doesn't sacrifice anything meaningful. For you, the pull to Cardano may have been the promise that it would be a "hybrid ledger", but I actually don't know why you think that was promised. The eUTxO paper quoted in this CPS literally says it "forgoes any notion of shared mutable state" due to the difficulties it creates when trying to scale the blockchain. One of the main reasons I came to Cardano was precisely because I thought there would be zero global state, for 1) the scaling issues and 2) I don't think it is necessary for layer 1. Even TradFi has global state on layer 2 (i.e., cash and coins are layer 1 and the banking system is layer 2).


colll78 commented Aug 27, 2024

I agree the example I gave above is indeed contrived, purposely so, since it is learning material that I created to illustrate how the design pattern works. There are indeed practical applications of this design pattern: a single credential is essentially global state over a single bit. The above example manages only a single bit; you can, however, extend it to multiple account credentials, which together form a sequence of bits of global state that can be used for more complicated examples.

For instance, typically the maximum number of signers on a multi-sig in a single block is limited by the constraints of a single transaction. If you use this design pattern, you can create a multi-sig contract that is parameterized by a set of credentials and succeeds only if all credentials are registered. Each credential script is parameterized by its own set of signers, and each credential's registration succeeds if and only if a majority of its signers' signatures are present. You then have a very large multi-sig that can check for a majority signature of a large set of participants in a single block (see the sketch below). You can similarly extend the odd/even example (modeling it as a bit sequence) to keep track of the total number of orders placed in a block, which you can then use to, for instance, adjust a lending rate or protocol fees based on utilization.
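A hedged sketch of the top-level check (the names are mine; it relies on the registration-witness trick from the earlier snippet, where a TxCertUpdateDRep certificate passes phase-1 only for a currently registered credential):

```haskell
import PlutusTx.Prelude
import PlutusLedgerApi.V3 (DRepCredential, TxCert (..), TxInfo (..))

-- Each group's credential script registers only when a majority of that
-- group's signatures are present, so an update certificate for the
-- credential witnesses that the group's majority was reached.
allGroupsApproved :: [DRepCredential] -> TxInfo -> Bool
allGroupsApproved groups info =
  all (\cred -> TxCertUpdateDRep cred `elem` txInfoTxCerts info) groups
```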

Also keep in mind that this isn't exactly "shared mutable state"; instead, that state is explicitly passed around from one transaction to another.

As for the concerns regarding scripts during phase-1 validation: O(1) just means a constant upper bound on time complexity. Both executing a Plutus script and looking up an input in the UTxO set are O(1), since the running time of a Plutus script is constrained by ex-units; the difference is the size of the constants. We already have a small, efficient scripting language with constants low enough to fit in phase-1 validation: native scripts! Deserialization and serialization are done to improve storage efficiency and preserve network bandwidth. If we impose very strict constraints on the size and ex-units of these accumulator scripts, I don't see why they would need a compact representation; the storage cost already takes size in bytes into consideration, so we can just store them directly in flat UPLC encoding.


fallen-icarus commented Aug 29, 2024

For instance, typically the maximum signers on a multi-sig in a single block is limited by the constraints of a single transaction. ... If you use this design pattern, ... you have a very large multi-sig that can check for a majority signature of a large set of participants in a single block.

Unless you are the stake pool operator creating the next block, you don't have control over which transactions will be added to the block. It seems very likely that some transactions will be added to the block while others are omitted, which means the multisig will likely fail even when it should succeed. Furthermore, because you don't control which transactions are added to the block, this approach is susceptible to a man-in-the-middle attack where the stake pool deliberately omits certain transactions from the block to control the outcome of the multisig. Unless I am misunderstanding something, I don't see how this approach is practical at all.

Besides, you can already get around the constraints of a single transaction by using native assets instead of registration statuses. For example, you can group signatures into sub-transactions and mint a symbolic "Yes" token if a sub-transaction's multisig succeeds (the minting would be governed by a separate script to ensure all sub-transactions can mint the same "Yes" token). Then you create a top-level transaction that spends each of the UTxOs holding the "Yes" tokens from the sub-transactions and burns the tokens. If at least the minimum required number of "Yes" tokens is being burned, the overall multisig succeeds. This approach is very similar to the registration approach except that no global state is required at all, and the sub-transactions and top-level transaction can appear in separate blocks. It is also easily scalable, since native assets are small enough that a single transaction can easily carry tens of thousands of "Yes" tokens (the tokens can be consolidated into a single UTxO before executing the top-level transaction).
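A rough sketch of the top-level tally described here (hypothetical names; the sub-transaction minting policy is elided, and valueOf on txInfoMint follows the usage in the earlier snippet in this thread):

```haskell
import PlutusTx.Prelude
import PlutusLedgerApi.V1.Value (CurrencySymbol, TokenName, valueOf)
import PlutusLedgerApi.V3 (TxInfo (..))

-- Burned tokens appear as negative quantities in txInfoMint, so the
-- overall multisig passes when at least `needed` "Yes" tokens are burned.
tallyPasses :: CurrencySymbol -> TokenName -> Integer -> TxInfo -> Bool
tallyPasses yesCS yesTN needed info =
  negate (valueOf (txInfoMint info) yesCS yesTN) >= needed
```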

You can similarly extend the odd/even example (modeling it as a bit sequence) to keep track of the total number of orders placed in a block, which you can then use to, for instance, adjust a lending rate or protocol fees based on utilization.

From an economic perspective, I think it is a really bad idea to base prices on the total number of orders processed in a previous block. Prices need to be based on the current ratio of supply and demand. If a block had a sudden burst of 100 orders, using this approach, prices should increase. But what if there are only 5 orders remaining to be filled (i.e., there is very little demand left to fill)? This is actually the time when prices should decrease, since the supply now dwarfs the demand. So now you are incentivizing fewer orders at the exact time the DApp actually wants more orders.

As another example, let's say there is a liquidity source that gets drained by 90% by a single order in the previous block. It was only one order, so the prices should decrease. But there is only 10% of the liquidity remaining, which means the prices should actually increase to disincentivize new orders and incentivize new liquidity. The total number of orders processed isn't relevant for determining prices.

And again, since you don't control which transactions actually go into the block, this use case can be gamed by stake pool operators (e.g., deliberately minimizing the number of orders per block to keep prices down). I don't see how this doesn't lead to economic instability and market distortions, so I don't consider this a practical use case either.

EDIT: I think it is worth pointing out that oracles can be used if a DApp really does care about the number of orders processed in a previous block. So in both use cases mentioned, the same niche can already be satisfied without using any kind of global state.
