Better communication/clarity on phase-1 validation and serialization criteria #2941
Hi @peter-mlabs, thanks for bringing this up, this is a great topic.

"What is phase 1 validation"

The definition really is "everything besides script execution" -- not vague at all, I know. If you look at the Alonzo specification, Figure 9, one line there is what we mean by phase 2. The rest of the spec is phase 1. The specs contain a lot of plain English prose describing everything, but please let us know if you think something is missing. You say that the specs are too much of a deep dive, but that is our best effort at explaining the ledger validation. For folks creating transactions (as opposed to blocks), there is a single topmost rule for validation in the spec. I am myself unfamiliar with the "Ledger Explanations" webpage.

"Invariants/constraints ensured by serialization"

This is a very interesting topic, and one that comes up a lot when we make the rules. We do have a wire specification for every ledger era (see the table at the top of this repo's readme; the CDDL column provides links to all the wire specs), but sometimes our schema cannot be fully captured by CDDL. We try to list these constraints as comments in the CDDL file, but I'm sure there are huge gaps. Filling in these gaps sounds like a great idea to me. I'm confused, though, why you mention validating a `TxInfo`.

"Alternate implementations considered invalid by IO due to behavior that is not specified."

Our goal has always been that the formal ledger specs, together with the wire specification, provide all the details needed for an alternate implementation. We may fall short of this goal, but that's always been the motivation. If you find gaps, I would be thrilled to get github issues for them. Having an alternate implementation would be extremely healthy, as we'd have more confidence in how well we've achieved our goal.

"Set of checks"

Since phase 1 validation is the bulk of this codebase, this is quite a task. That said, we do have a lot of re-usable code, and maybe in the future we can try to steer things towards re-usable checks for Plutus developers. For example, there is a relevant hunk of the Alonzo code. |
Regarding the emulator: I realize the response to github issues has been spotty, but we really do want feedback on problems with it. We have increased staffing and are trying to stay on top of the github issues. |
Thank you both for your prompt responses.
Understood. I'll be taking a closer look at these myself, but it's been a perennial re-hashing: any time someone's been tasked with "figuring out what phase-1 is", it's been a struggle -- perhaps after I've had a look myself, I'll have more direct insight into what, if anything, could be improved.
Just the Read The Docs page here (linked in the README of this repo). I think it would be a logical place for saying things like "Values are normalized" and explaining exactly what that means -- with respect to non-duplicate entries, sorted entries, always containing an ADA entry, etc.
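To make the kind of documentation being asked for concrete, here is a hedged, language-agnostic sketch of what a "Values are normalized" check might look like. The ledger itself is Haskell; this Python model and the exact rule set (ADA entry always present, no zero-quantity non-ADA entries, sorted duplicate-free keys) are assumptions drawn from this thread and the spec discussion, not an authoritative statement of the ledger's behavior.

```python
# Hypothetical model: a Plutus-style Value as nested dicts,
# { currency_symbol: { token_name: quantity } }, where the ADA entry
# uses the empty symbol and empty token name (written "ADA" informally
# elsewhere in this thread).
ADA = ("", "")

def is_value_normalized(value: dict) -> bool:
    # ADA entry must always be present
    if "" not in value or "" not in value.get("", {}):
        return False
    # outer map keys sorted (Python dicts preserve insertion order,
    # so this check is meaningful for a decoded map)
    symbols = list(value.keys())
    if symbols != sorted(symbols):
        return False
    for sym, tokens in value.items():
        names = list(tokens.keys())
        if names != sorted(names):
            return False  # inner map not sorted
        for name, qty in tokens.items():
            if (sym, name) != ADA and qty == 0:
                return False  # zero-quantity non-ADA entry
    return True
```

A table of such rules per field, each with a one-line rationale, is essentially what this issue is requesting.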
I am aware -- and fully agree -- that this terminology is imprecise, but I don't have a great vocabulary to distinguish my intended meaning. What I mean by "validating a TxInfo" or "validating a ScriptContext" is:
For instance, including an invalid currency symbol in a transaction would produce an error before any script ever runs.

The crux of the issue, as I see it, is that there are inhabitants of the types that (in a perfect world) wouldn't be possible to construct; but we don't live in that world, and thus there are certain values of those types that are "invalid" for a given use case. And to be explicit, the reason this is difficult to develop around is because it's misleading about what we do need to check for in our scripts. If it is legal for non-normalized `Value`s to reach a script, then we need to guard against them ourselves -- and the only way to know, apparently, is to read the spec and CDDL.

I was under the impression that doing so wasn't intended to be a prerequisite for contributing to production-worthy script development. If you disagree, and do think that the specification and CDDL should be required reading for all smart contract developers, I won't take offense and will adjust my expectations accordingly. This extends beyond just the fully-implemented validator or minting policy level, too -- there's no reason to be quickcheck-testing helper functions against inputs that could never occur on-chain. Does this all make sense? Another relevant issue from a coworker is here.
Continuing on the above, and forgive my vague understanding of all of this: the "phase 1 checks I care about" seem like they extend beyond things that are represented in the code base as actual checks, and do indeed get into a more vague territory of serialization. It's not so much a is-this-transaction-valid "validation" that I'm concerned with, as it is a "is this value of this type something that my plutus script could possibly ever encounter". So I suppose that what I'm looking for may not be what is traditionally considered phase 1 validation, but still falls under the definition of "everything before phase 2". What vocabulary would you use to describe the situation here?
Very glad to hear that more effort is being put into this. We've been working with (and on) various tools that were meant to fill in some of the gaps, but none have quite lived up to what we hoped. We're working on PCB, and this is the area where the issues above have bitten us most recently. We're finding it useful to partially specify incomplete, obviously invalid script contexts to feed to scripts at certain stages in development. For instance, if a script should validate based only on what is contained in one particular field, we can leave the rest of the context unspecified. |
It's probably true that we never explicitly state that "phase-1 failures" are all the failures except for the single phase-2 failure. It's not something that needed a formal definition in the spec, but it is language that we use a lot and I agree that that point should be explicitly stated somewhere.
Ah! That documentation is very incomplete and really only explains the multi-assets. The point about "Values are normalized" is mentioned in the spec (search the Alonzo spec for "CAUTION" to find it), or at least the ADA part. Perhaps we can add more details there.
So you would like to know which parts of the transaction context come from the ledger state vs those that are in the transaction? (And those in the middle, like how transaction inputs are "resolved" against the UTxO so that the corresponding outputs are included inside the context, for example.)
I think I am following, and I'm sorry about all your struggles. I don't think such a function is possible without changing the signature to take more context (such as the ledger state or protocol parameters).
When developing the ledger code for the Alonzo era, we were assuming that the Plutus tooling would save folks from the nuances of things like the ones discussed here.
I agree with you, as I said above, the plan was for the tooling to be a shield from worrying about these details.
It does, I fully agree that there is an incredible amount of technical details surrounding the transaction context. I'm sorry it's still not made easy by the tooling.
You very well might be correct. A couple of examples, if you have them easily at hand, would be nice.
I myself probably say "phase 1 validation" somewhat informally to mean any of the ledger rules except running the Plutus evaluator. And I say things like "that is guarded on the wire" referring to things you cannot do because of the logic in the deserialization. I think the Plutus team uses the same language. It's very enlightening to see the troubles you are having, and the concepts that were not obviously name-worthy to us when developing the ledger for Alonzo. I myself have only written pretty trivial Plutus scripts, but I would love to do more and gain more empathy for the hard parts. |
I think we're getting on the same page here!
I don't think that is what I mean. I want to know which values/inhabitants of certain types or in certain contexts are "invalid" from either the perspective of (using your terminology) "phase 1 validation" or "guarded on the wire", and specifically why they are invalid. For instance (pseudo-code):

```
v :: Value
v = { "ADA" :
      { "" : 0 }
    , "FOO" :
      { "BAR" : 100
      , "BAZ" : -100
      }
    }
```

would be a "valid" inhabitant of the `Value` type. But this would be "invalid" if it was used as, say, the value of a transaction output. This means that when we're writing scripts and doing something akin to inspecting the entries of a `Value`, we need to know which shapes can actually occur. And I want to know this, because if I'm writing a function that operates on `Value`s,
this function would be valid to include in scripts only if it's called on values in the expected, normalized form. So your comment makes sense, but I don't know if we're talking about the exact same thing here. I want to know, given any possible ledger state and any possible spec-compliant implementation, whether I'd ever "launch the nukes" -- ideally being able to tell just from the types. I'll try to keep this issue updated as I read through the specs and possibly get some more concrete examples. |
Yes indeed, it's becoming clearer and clearer! I think the following might help (and I can try to get to it early next week): go through each type inside the script context and spell out its constraints. |
For reference, the Plutus V2 context looks like:
These types are from the Plutus library, and the ledger provides the translation into them. The Plutus types do protect the users from some of the details mentioned in this issue.
I think the TLDR here is that the ledger types are more restrictive than the types in the context, and some of that information is lost in the translation. Did this help? |
Thanks, Jared -- this is a very good start to what we're looking for. A few comments follow below. I'm making some assumptions, so please flag if anything seems out of place -- it's entirely possible that things that I'm inferring as "today's convention" were intentionally left open-ended.
Just to clarify, the
Also:
Are you saying that there is another Plutus library with a
Should the Shouldn't the Additionally, the
Also, this can't be empty -- this was noted in one of the 30-something CIPs, because unless this is said explicitly, there wouldn't be anything preventing someone from paying fees via rewards withdrawals.
As above, it seems like there are restrictions on "valid" with respect to datums and with respect to scripts. This probably can't be empty, right? Is there some weird situation with staking, perhaps, where a UTxO containing the exact amount needed for the fee could be passed as the input and not need to list an output?
Must contain
Also must be positive
Agreed that this is lower priority for now, but part of the reason nobody cares about these is because nobody understands them 😂. Some immediate notes:
I don't think there's much new here that hasn't been covered above, but to be explicit:
I've not dealt with this type much, but it is defined as
Aside from the comments about hashes vs. any
I disagree that we can't rule things out.
For the Redeemers, I assume that this is indeed unrestricted (assuming the constructors themselves are unrestricted). But for the map in general, I'm assuming there are additional restrictions. Off the top of my head:
This does sound tricky. I'm not sure of the details, but perhaps a validation function could take a witness function or phantom type to ensure this. I think I also read in the inline-datum CIP that "extra datums" could be passed. There's been some recurring folklore for newcomers that this means arbitrary datums can show up here,

but what it really means (as I've been told) is that a datum that is not strictly necessary for validation can be included provided it is attached to a transaction input or output. Is this map sorted? Duplicates are probably not allowed, right? The actual rectification of this issue might be more in the domain of the Plutus tooling.
And regardless, it seems to me that the overall solution here is probably:
Thoughts? |
I was vague, sorry. By canonical form, I meant handling the zero-valued fields. I thought this was what normalizeValue was for, but poking around I don't see it being used. I see master is already different than the release branch, so yea, maybe bring that up with the Plutus folks.
The transaction ID is a blake2b 256 hash, so 32 bytes.
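The sizes discussed here and later in the thread (blake2b-256 for transaction IDs, blake2b-224 for key/script hashes) can be checked mechanically. A small illustrative sketch -- the helper names are hypothetical, and the sizes are taken from this thread and the Shelley spec's appendix A.1:

```python
import hashlib

TX_ID_BYTES = 32       # blake2b-256, per this thread
CREDENTIAL_BYTES = 28  # blake2b-224, per this thread / Shelley appendix A.1

def looks_like_tx_id(h: bytes) -> bool:
    return len(h) == TX_ID_BYTES

def looks_like_credential_hash(h: bytes) -> bool:
    return len(h) == CREDENTIAL_BYTES

# Illustrative only: hash some bytes with the same algorithm and digest
# size; the real ledger hashes the CBOR-serialized transaction body.
tx_id = hashlib.blake2b(b"example tx body", digest_size=32).digest()
key_hash = hashlib.blake2b(b"example vkey", digest_size=28).digest()
```

Length checks like these are exactly the kind of "local" validation that a `TxInfo`-checking suite could apply without any ledger state.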
You are correct. Apologies -- I'm more familiar with the ledger types (where it is a Word64) and was mostly going from memory above.
That's true, I was focusing on the TxIn and forgot about the container. This is a ledger rule. It's not enforced to prevent someone from paying fees via rewards withdrawals; it's the replay protection for everything inside a transaction.
That's true, in the ledger it is a set, making these properties much more obvious. But this information is lost in the translation to the script context.
The ledger may not (unless you count
indeed, it is a list in the ledger
It can indeed be empty, for the reason you mention. But it's not too weird: you can always pay more in fees than you are required.
That's true, but I thought that this was too far toward the "ledger rule" side of the spectrum to be considered here. It requires knowing the current protocol parameters; you can't just look at the value to make the determination.
true.
If you replace SHA-256 with blake2b 224 (28 bytes), then yes.
Non-negative, yes. Pointer addresses are nearly unused, and will almost certainly be deprecated soon.
yep
Unfortunately not, which is arguably a bug. We currently allow you to de-register a stake credential, re-register it, then de-register it again, all in the same transaction, which leads to duplicates. Maybe we will disallow this in a future era.
yep, most of this comes from Map.
This is guarded by the ledger rules; the transaction must fall within the (slot) interval.
It's a set in the ledger, so correct, no dups and ordered lexicographically. it can be empty.
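Several of the "it's a set in the ledger" answers in this thread reduce to the same local check on the list that reaches the script: sorted, duplicate-free, and (for some fields, like inputs) non-empty. A hedged generic sketch -- which fields require non-emptiness is per this discussion, not a formal statement of the rules:

```python
def is_set_like(items: list, require_nonempty: bool = False) -> bool:
    """Check a list field that the ledger models as a set:
    strictly increasing (so sorted with no duplicates).

    Whether the field may be empty varies: per this thread, inputs must
    be non-empty (replay protection), while other set-like fields may
    be empty.
    """
    if require_nonempty and not items:
        return False
    # strict < between neighbors rules out both disorder and duplicates
    return all(a < b for a, b in zip(items, items[1:]))

# txInfoInputs-style data: (tx_id, index) pairs, ordered lexicographically
inputs = [("aa" * 32, 0), ("aa" * 32, 1), ("bb" * 32, 0)]
```

Note that, as the comment above says, the set structure is lost in the translation to the script context -- which is exactly why a check like this has to be re-stated on the list representation.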
did you mean txInfoFee? the fee is just lovelace. did you mean an output? maybe you mean that the policy ID must show up in a resolved input or an output?
true!
yes
yes
I'm not sure I follow. The number of redeemers is dictated by the number of plutus script hashes in the transaction. But note that you can't tell from looking at a script hash if it is a plutus script or not, you need the transaction witnesses to figure that out. There's a CIP right now to deal with the awkwardness of this fact. But you'd need exactly one redeemer for each of: plutus input, plutus mint, plutus cert, plutus withdrawal.
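The counting rule just described -- exactly one redeemer for each plutus input, plutus mint, plutus cert, and plutus withdrawal -- can be sketched as follows. This is a hedged model, not ledger code: `is_plutus` stands in for the witness lookup the comment says you need, and the repeated-certificate caveat (only the first occurrence of an identical certificate needs a redeemer) is modeled by simple de-duplication.

```python
def expected_redeemer_count(spent_script_hashes, minted_policies,
                            certs, withdrawal_script_hashes,
                            is_plutus) -> int:
    """certs is a list of (certificate, script_hash) pairs; a
    certificate repeated verbatim in the same transaction only needs
    a redeemer for its first occurrence (per this thread)."""
    count = sum(1 for h in spent_script_hashes if is_plutus(h))
    count += sum(1 for h in minted_policies if is_plutus(h))
    seen = set()
    for cert, h in certs:
        if is_plutus(h) and cert not in seen:
            count += 1
            seen.add(cert)
    count += sum(1 for h in withdrawal_script_hashes if is_plutus(h))
    return count
```

As noted above, `is_plutus` cannot be derived from the script hash alone; it requires the transaction witnesses, which is part of why this check is awkward to state on a bare `TxInfo`.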
The ledger does guarantee that the hash matches, but I assume you want to make a context yourself and know that it is valid. I don't know what the best thing would be to do, you need to cache the serialization somehow.
The only "supplementary" datums that are allowed are those that correspond to a newly created output, and those that are in an output corresponding (in the UTxO) to a reference input used in the transaction.
It's a map in the ledger, so yes it is sorted and no, dups are not allowed (remember, the ledger caches the serialization of the datums and would never reserialize).
I think once our list settles down, we could go through each one and answer these questions. It's a fun exercise, but it definitely requires me to look at everything from a perspective that I am not myself used to.
Most of the things we have discussed can be derived from the formal spec and the wire spec. But, as I mention above, the perspective here is very different, and so things are not stated in the way that is helpful for you and the other folks looking for the possible space of values. Exacerbating the problem is the fact that the ledger spec is written operationally, and the denotational semantics can sometimes be obscured (though it makes it much easier to compare the spec with the code).
The ledger team has never yet had the breathing room to build out a proper, user-friendly API. Our biggest "consumer" is the consensus layer, and the bulk of our effort has gone to that. But I realize that many folks have struggled to use the ledger code as an API. I think your suggestions are good, and would be a part of the API. The cardano-api (in the node repository) does have an API for some of the ledger functionality, but it also has not yet been able to spend the time to add in all the features that are really needed. I guess I (probably naively) thought that most of these issues that we've talked about were not really ledger issues, but issues for the Plutus tooling. My biggest take-away from your last post is: the ledger has more restrictive types than those in the context, and this information is lost during the conversion. I understand why you would want all this validation, since you want to make arbitrary contexts so that you can test Plutus scripts. There's another conversation about what is and is not a good idea to assume inside the context of a Plutus script. This came up in the CIP that you mentioned, about using lists instead of sets for inputs, but all the same concerns apply here. |
Thanks again Jared! I'll try to keep this response shorter to not take up too much more of your time. Can you re-confirm this statement for me?
The haddocks say SHA256. Is this incorrect? If so, I'll open a ticket on that repo.
of course, your comments make sense. But I think my point was more that we know some ADA has to be present.
As above, going down and unwrapping the types leads to a validator hash being listed as a SHA256. Can you confirm that this is incorrect? If so, I'll open a ticket on that repo.
Sorry, no -- I meant
I think my point was that if we had
I.e., a
Agreed with everything you said. I think the line gets blurred between ledger vs. tooling, because when the official tooling is inadequate and can't be relied on as a ground-truth, we have to turn back to the ledger. And, as you've mentioned, it seems like the ledger team has not only a different set of priorities due to different consumers, but also a different set of types they're working with (sets vs lists, etc.). But yes, I think we've now got a solid understanding of what each of us is working with. My mind is currently wondering whether this repo was the wrong place for this, though -- this was where I was pointed to when I asked the question "what is phase-1 validation", but it seems like most of my issues are actually more related to the Plutus side of things. In your opinion:
|
Conversations like this are my favorite part of my job. And improving life for Plutus developers is arguably the most important thing we can do for Cardano right now. So don't slow down on my account! :)
Absolutely. The only hash that we use in the cardano ledger is blake2b. The link that you shared points to incorrect documentation -- thank you very much for letting us know (opening an issue would be grand; if you don't, I will). Note that the Shelley ledger spec specifies the hashes in appendix A.1 (sometimes we use 224, and sometimes we use 256).
ah, yes, it is safe to assume it is not zero. (unless you are on some testnet where the protocol parameter dictating the cost was set to zero).
ah yes, that is another good check. relatedly, you could do the whole "preservation of ADA" check as well, if you are okay with the validation function taking the protocol parameters that determine the deposits amounts (stake cert registration and stake pool registration). This is the "consumed == produced" check in the specs.
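The "consumed == produced" check mentioned here can be sketched at the lovelace level. This is a deliberately simplified model: the real rule operates on full multi-asset values (minting counts on the consumed side for non-ADA assets) and the deposit/refund amounts come from protocol parameters, as the comment says.

```python
def preserves_ada(inputs_lovelace, withdrawals_lovelace, refunds_lovelace,
                  outputs_lovelace, fee, deposits_lovelace) -> bool:
    """Simplified 'consumed == produced' from the specs, ADA only.

    consumed = resolved inputs + withdrawals + deposit refunds
    produced = outputs + fee + new deposits
    Deposit and refund amounts depend on protocol parameters
    (stake credential and stake pool registration) in the real rule.
    """
    consumed = (sum(inputs_lovelace) + sum(withdrawals_lovelace)
                + refunds_lovelace)
    produced = sum(outputs_lovelace) + fee + deposits_lovelace
    return consumed == produced
```

This is a good example of a check that sits on the "ledger rule" end of the spectrum: it cannot be evaluated from a `TxInfo` alone without also supplying the protocol parameters.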
Yes, that is another good check. The trouble, though, will be determining which script hashes correspond to Plutus scripts (in the absence of having the scripts in the context). And there is one gotcha to be aware of, though it's a rather odd case: if there is a repeated certificate inside a transaction (such as the situation I mentioned earlier, where someone de-registers, re-registers, and de-registers again), only the first such certificate needs the redeemer.
I'm very appreciative of seeing y'all's perspective; I'm glad you raised this here so that I was able to gain insight.
Thank you for the offer! I think we are definitely honing in on a helper validation function that someone could write without too much effort. After we've settled on what it should look like (maybe we're there now), I can make a clear list of what to check. I can also check with the Plutus team and see if they have an opinion on where this should live. I love the idea of gaining new contributors, though, so if you want to do it, that would be fun too.
indeed, we can only do that with a new plutus language version. |
Opened the issue, thanks for confirming. But yes, in terms of the helper validation function, the dream would be having a suite of such functions that can cover all of the different bits. I mentioned this above briefly, but the reason why we'd want a suite rather than (only) one big function is because the checks we're after are broadly split into:
When our testing utilities are building a `TxInfo`, we often only care about a few fields. So something like (pseudo-code):

```
TxInfo {
  txInfoMint = [("ABCD", "FOO", 100)],
  txInfoInputs = [],
  txInfoOutputs = [],
  (...)
}
```

would be rejected, since we might not want to bother fully specifying the entire `TxInfo`. This extends to your comment about protocol parameters and ledger state, as well. Sometimes we might want to pass those explicitly, sometimes we might want to rely on sensible defaults, and sometimes we might want to turn off all checks that can't be conclusively determined without the extra info.

In terms of our own contributions, I'd have to check with the client and see where priorities lie. I think there's a strong case to be made that many of our testing functions could be made much better with these sorts of checks -- 10000 quickcheck cases don't pack as big of a punch if half of them are nonsense! :) Let me discuss with my team and get back to you; but collaboration with the plutus team would be good, and having it totally handled for us would be great! Thanks again, Jared |
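The split described above -- checks that always apply versus checks that should be skippable when a context is deliberately partial or when ledger state isn't supplied -- could be packaged as a small suite, in the spirit of the `isPhase1Valid` decomposition from the original issue. A hedged sketch; all names and the two sample checks are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    run: Callable[[dict], bool]
    needs_full_context: bool = False  # skip when the TxInfo is partial

def run_checks(tx_info: dict, checks: list, partial: bool = False) -> list:
    """Return the names of failing checks; checks that need the whole
    context are skipped for deliberately partial TxInfos."""
    failures = []
    for c in checks:
        if partial and c.needs_full_context:
            continue
        if not c.run(tx_info):
            failures.append(c.name)
    return failures

# Hypothetical checks, in the spirit of this thread:
checks = [
    Check("inputs-nonempty",
          lambda t: len(t.get("txInfoInputs", [])) > 0,
          needs_full_context=True),
    Check("mint-no-zero",
          lambda t: 0 not in [q for (_, _, q) in t.get("txInfoMint", [])]),
]
```

With this shape, the partially-specified `TxInfo` above passes in "partial" mode but fails the strict run, which is exactly the reporting behavior being asked for.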
This is a very long thread; I'm going to try to respond to it below, please correct me if I've misrepresented anyone. I think Peter's primary desires are:
I think these are quite different desires so I'll address them separately. In order to tell people what
I would definitely be in favour of doing more of 1 and 2. However, I think we can't cover everything. The ledger spec is big, and almost every aspect of it bears on what you can see in a script context.

I don't really know what to do here. I think many of the specific questions asked by Peter are low-enough-hanging fruit that it's worth including the answers in more places, but ultimately if you want to know the details of how some corner of transaction validation works, you're going to have to go to the spec.

I don't really know what to do about testing either. The most reliable approach today is probably end-to-end,
i.e. test via submitting transactions that actually look like the transactions your application will submit. This is unsatisfying in a number of ways. It would be nice to be able to test the validator script more in isolation, and it would be nice to be able to test variants and malicious interactions. But I don't think anyone has an easy answer for that today, although people are thinking about it. To be honest, I am not optimistic about the idea of generating valid contexts wholesale.

Finally, a special note on `Value`. The right answer here IMO is to write your operations at the correct level of abstraction, against the map-like interface rather than the underlying representation. It's unfortunate that tools such as Plutarch force people to break the abstraction and operate on the representation. Ironically, this is a case where we did try to present people with types that enforce the correct usage. |
Another thing: I think it is a real awkward point that this sits between the ledger and plutus. It really is the case that if you want to write a dapp, you need to know quite a lot about the ledger, more than any other kind of user, and the only way to get that information is by reading the spec. I have been arguing for some time that I would like to have some kind of medium-detail ledger conceptual documentation that at least covered some of the concepts, even if it didn't go into the full detail that you need the spec for. |
Indeed! @michaelpj knows I am also very fond of this idea. (You can search this long conversation for "dreamed of" to see me expressing the same thing.) |
Thanks for your response @michaelpj; I'm very appreciative that you've obviously taken the time to read through the extended discussion between Jared and I. I think that your first three points are spot on, and I would say that they're basically a spectrum from "a perfect world" to "what we'll suffer through if we must" from the perspective of developers.

One of the things that drew me to Haskell, which I am admittedly more new to than most of my colleagues, is that illegal states can be made un-representable. Doing so at the type level is the holy grail, because then you know a total function is only operating on exactly what it needs to. Newtypes with smart constructors can help at the value level, but then you're still littering the code with runtime checks.

Reading the spec, IMO, is probably something that every Cardano dApp developer should do once, but it's hard to find the time when we're struggling against so many other things. And it's very difficult to keep all of that in our heads -- that's another reason I was drawn to Haskell. I don't have a great memory, and having types around -- expressive, precise types -- meant that I no longer had to worry about exactly whether I needed to check that my strings were a certain length before passing them to a certain function.

I totally agree that there are some things -- particularly the interdependencies in the transaction as a whole -- that are harder to capture. Regarding your comment about Plutarch:
I wouldn't say Plutarch does anything in an untyped way at all. Plutarch now carries around information at the type level to indicate whether values are sorted, positive, non-zero, or neither, and this makes things much easier to work with than the last time I used it.

The issue that I have with your comments regarding generating script contexts: I don't have actual measurements, but I am positive that applying parameters to a plutus function and evaluating it will be much faster than building an entire transaction just to test one helper function. And even more so, we don't want to generate fully valid transactions. We want to generate just the parts of the TxInfo that we care about.

And just to be clear -- testing validator scripts and minting policies is a very small percentage of what we want to do. It's important, for sure, but testing the utility functions is more critical. It's very difficult to compose a program with confidence when all of your requisite parts are totally untested.

And for your final comment: I don't disagree that dApp development teams need SMEs on the spec, but it is a waste of time for every developer to need to have a copy of the spec sitting on their desk at all times just to write a legal inhabitant of a type. We're putting in more and more time working around this and demonstrating that it is possible to represent a large swath of this low-hanging fruit computationally. Of course, the spec will always be the source of truth. A conceptual document will help, but it's not as useful as wrapping that knowledge up and expressing it computationally. |
For testing, I can recommend looking into https://github.com/mlabs-haskell/plutus-simple-model.
There's a very good reason for it: Substantial efficiency improvements. |
EDIT: damn, accidentally posted before I was done, some more added afterwards. I think it would be helpful to try and extract from this thread a list of specific places where we can adopt strategy 1 or 2.
I mean, you only need to remember about the pieces you actually work with. A developer who has 0 domain knowledge can't realistically do anything with transactions. And you don't need much to get the basics down. Inputs, outputs, values, none of this is that weird or needs you to read the spec. But also it is simply true that full Cardano transactions are pretty complicated, and if you are in the game of making Cardano transactions you can't really avoid that, although it depends how much of the complexity you use.
I think this is perhaps a more promising angle. The local conditions on specific fields are more likely to be documentable or enforceable. The non-local conditions on the transaction as a whole would be quite difficult to replicate without duplicating the entirety of the ledger rules. Examples:
To be clear, there are two sources of pain in this duplication of the ledger logic:
I think 2 is the worse problem over time, and is why I'm very reluctant to get into this.
I guess we thought people would use that power responsibly. Perhaps it just needs a bigger warning. Also on the subject of generation... I think a collection of generators for the individual pieces could be useful. I'm not up-to-date on the latest in Plutarch.
My question is: why do you care about such things? My guess is: because you're operating on the underlying representation, and you want to write code that is faster by making use of those properties, rather than writing the more generic code that would work regardless. I am fundamentally torn here. I think it's better software engineering to not write code that's so exposed to the tiny implementation details, but also I understand that performance is a key concern. Also... those are some pretty ferocious types in Plutarch. For better or worse, we have tried to keep the types relatively simple. As ever in Haskell, that's a judgement call.
I'm not saying there's no reason for it, I'm just pointing out that you're breaking the abstraction, and that has consequences. |
Overview
A perennial issue on a team I manage has been understanding what exactly comprises phase-1 validation. I would like to see a clear, plain English listing of the criteria that must be satisfied by a script context or transaction in order to actually function on the main net.
Full disclaimer: I've not fully read the specs or the implementation myself, and this issue is, in part, to request a resource to which I can direct developers in lieu of a code-dive or spec-read.
Current Understanding
The consensus among my coworkers is that phase-1 validation is performed on the `Tx` type rather than a `TxInfo`, meaning there's a lot of extra things that would be unsuitable for simply validating a `TxInfo` or `ScriptContext`.

Our use case
The reason for why we need this is to be able to construct script contexts in Haskell and know whether or not they are realistic to what we would see in practice. My understanding is that tools like `EmulatorTrace` have failed to perform this sufficiently -- failing to completely normalize `Value`s, for example. This leads to situations where developers are unsure whether they need to check for phase-1 validity in phase-2.

We would, in particular, like to be able to have a set of checks like `isValueNormalized`, `isTransactionBalanced`, `areOutputsSorted`, and so forth, that we can combine into something that corresponds to `isPhase1Valid`. The final property should be that `isPhase1Valid` is true if and only if a given transaction/script context could actually occur.

We'd be happy to write these if needed, but they would ideally be provided for us. It's insufficient to fully serialize a transaction using the various utilities available, because we want to be able to define partially-invalid script contexts at certain stages in the development process, but also understand which validation checks would fail on a given object.
Finally, we'd want to partially auto-generate script contexts, possibly exhaustively within certain bounds, so we'd want these checks to be fast. Full serialization would ostensibly be much slower than a pure `isPhase1Valid` function, and partial validation would certainly be faster than either if we only cared about a subset of checks.

Ideal Outcome

`EmulatorTrace` and other IO utilities.
and other IO utilities.The text was updated successfully, but these errors were encountered: