[move-lang] Table extension. #150
Conversation
Quick first pass, but I like it!
```move
/// Insert the pair (`key`, `val`) to `table`.
/// Aborts if there is already an entry for `key`.
native public fun insert<K, V>(table: &mut Table<K, V>, key: K, val: V);
```
Yeah, this is the right design for insert. Many libraries return the value if it already exists. In smart contract development, I found that there's likely different behavior if there are two values at the same key, e.g. merge. If we return the value in Move, we then need to deal with dropping `Option::none` or other nonsense in the common case, which isn't ergonomic.
This PR is now ready and tested, thanks to an exhaustive test suite for tables @junkil-park created for the EVM project.
```move
/// Type of large-scale storage tables.
module Extensions::Table {
    /// Type of tables
    struct Table<phantom K, phantom V> has store {
```
When I saw this, I initially had some concern that we might need `K: copy` to preserve resource safety. After reading through the current implementation, I am convinced that it is safe. However:

1. I am concerned that someone might add an API that returns a `K` in the future (e.g., `remove_entry<K, V>(table: &mut Table<K, V>, k: &K): (K, V)`) without realizing that this compromises resource safety.
2. I am concerned that this might appear to violate resource safety from the perspective of an outside observer. For example: if you do
```move
add_nft_to_table(nft: &NFT, v: u64, table: &mut Table<NFT, u64>) {
    Table::insert(table, nft, v)
}
```
and an outside entity (e.g., a block explorer) displays the keys of `table` as structured types, the `NFT` (and its unique ID) will appear to live in two places at once: in the account of the owner, and in the domain of the table. That is potentially confusing/alarming.
This feels somewhat similar in flavor to Eth attacks like sleepminting that aren't smart contract exploits, but can be used to scam clients. The equivalent attack for tables might be something like fooling a client into thinking you sent them an NFT because it's a key of the table the client controls.
The easy way to address both of these would be to require `K: copy`. Does that eliminate any of the use-cases Meta/Aptos care about, and are the use-cases important enough that we're willing to risk (2) (less worried about (1), which can be mitigated through thorough comments + testing)?
As regards (1), if K has no copy, would it even be possible to call this API? It really depends on how NFTs are created. If they are truly unique, then two keys which are equal cannot exist at the same time; hence you have no way to call this API (either the NFT is in the table or not). (Clearly, in this case we also need to have `insert(k: K, ..)` instead of `insert(k: &K, ..)`. I understand that any native API can break this via serialization, but that seems to be in the nature of such APIs, and we need to carefully review them before we let them in.)
Regarding (2), I'm not sure whether I fully follow, but a resource explorer for Move should not display the keys, or if it does, it should display them clearly marked as references. Would that solve the issue?
I feel a bit reluctant at this time to restrict the keys, as it seems to cause no technical problem to even have tables as keys. And the Move Prover has no issue with such general keys as long as there is no way to discover them from the table. Also, non-copy NFTs as keys seem to be a use case we need; perhaps @junkil-park can say more about this.
> Regards (2), I'm not sure whether I fully follow, but a resource explorer for Move should not display the keys, or if so, should display them clearly marked as references. Would that solve the issue?
This would solve the issue, but it raises special cases that every client (explorers, wallets, marketplaces, ...) would not have to worry about if not for tables. E.g., "mark it as a reference" isn't something that clients currently have to handle, since on-chain structs can't hold references. To give another example, you can imagine a client implementing an API like `get_all_types<T>(a: address) -> Vec<T>` that returns all top-level or nested values of type `T` in `a`. If the implementation doesn't know to special-case table keys, it will return two copies of `NFT` in code like the example above.
As a language designer who now spends a fair amount of time on clients and the issues Move creates for them, I am coming to appreciate the importance of minimizing client complexity more, particularly because every client has to implement and understand the special cases we introduce. I personally weigh this complexity reduction higher than supporting use-cases that need non-`copy` keys (particularly if we don't have any in mind), but I am only one opinion here.
> As regards (1) if K has no copy, would it even be possible to call this API?
Good point. This particular API is indeed impossible to call for a non-`copy` type.
We could sha256 all keys before we apply them. That is what the EVM does. This basically obfuscates them. I do not know how explorers for EVM chains deal with this but in fact I can't imagine that EVM storage can be meaningfully explored.
That would also work.
Sorry, but I can't follow the line of argument. Sam's argument was rather different, and that discussion seems to be resolved. Yours and Jolestar's main argument, as far as I can see, is that you don't like it because it is not the same as in Rust. I don't agree that this -- if I understand you right -- would be a strong reason. We are in many ways different from Rust in the current stdlib APIs (vectors, options, etc.), also because we have (or do not have) `drop`, which Rust doesn't.
If we can't agree on this API, I guess we need to wait until I'm back from my trip to settle this. We at Meta aren't blocked by this PR; I do not know about Aptos, though (@davidiw).
> Your's and Jolestar's main argument I can see is that you don't like it because it is not the same as in Rust.
I don't think that's the argument either of us is making. It at least isn't mine. I mentioned Rust briefly only in that they also have a similar linear type system.
All of my points are really about how APIs work in Move. Maybe that will help reframe what I have said?
In essence, I, as a Move developer, wouldn't expect any collection API to behave like this. (And our Map APIs almost certainly won't). I think the disparity will be confusing and force users to think about Tables very differently, which may or may not be a good thing.
Going a bit further just on the storage side of that API, all of our existing interactions with "storage" are over owned values. You can modify by reference, but there is no insertion/removal by reference. I think it might feel confusing that the keys for tables don't behave like other storage values, even though they actually appear in storage. So what I was trying to argue is that it feels more straightforward to me to have tables operate over owned keys (with `copy + drop` for API simplicity), and then have wrapper modules if you want to do things like Sha256 of "resource" keys by reference.
A table is not a regular collection; that's why we call it a table and not a "map". A table does not store any keys. There is no way to retrieve keys from a table. Therefore moving or copying keys into a table on insert is logically inconsistent. In Move it is idiomatic to pass a reference if you do not transfer ownership; that is what is happening on insert.
If you move keys into a table on insert, the next question would be: why can't you get them back from the table? The API would be incomplete and illogical. If you now make the keys copyable, you just hide that inconsistency: a copy of the key is conceptually stored in the table, but you still can't get it back.
The way it is right now is really the most idiomatic way for Move I can imagine, given all the design constraints we already agree upon. I believe we still need to introduce real "maps" which can be iterated and are not large scale, but this is a different discussion.
As a developer hacking on a pseudo-table, I definitely state that an owned key or making it copyable is irrelevant unless you want to support enumeration of the table. Considering that is a non-goal, the user of a table would need to concoct some other method to keep track of keys. I mean if a key cannot be generated or enumerated, how on earth would a user retrieve the data from the table? I cannot fathom the value in keeping such a key.
> I definitely state that an owned key or making it copyable is irrelevant unless you want to support enumeration of the table
`copy + drop` doesn't have anything to do with enumeration, so I'm not sure I see where you are coming from.
I think the concerns @jolestar and I both share are that:

- the API should feel cohesive with other libraries, and
- the API should feel logical in conveying intended behavior to a user of the module.
Did I express your concerns correctly, @jolestar?
> A table does not store any keys. There is no way how you can retrieve keys from a table. Therefore moving or copying keys into a table on insert is logically inconsistent.
I think if we can nail this or make it apparent to the user, that would be great. Maybe even having the reference in the API will help a developer who is comparing this to Map or other libraries stop and say "what is going on here? How is this different?", and then that will help them understand what is going on in the Table in general.
Force-pushed from ec24474 to 6bba445
```rust
// Take the transaction hash provided by the environment, combine it with the # of tables
// produced so far, sha256 this and select 16 bytes from the result. Given the txn hash
// is unique, this should create a unique and deterministic global id.
let mut digest = Sha3_256::new();
Digest::update(&mut digest, table_context.txn_hash.to_be_bytes());
Digest::update(&mut digest, table_data.new_tables.len().to_be_bytes());
let bytes: [u8; 16] = digest.finalize()[0..16].try_into().unwrap();
let id = u128::from_be_bytes(bytes);
assert!(table_data.new_tables.insert(TableHandle(id)));
```
If multiple tables are created for the same transaction, will the generated ids be duplicated (if they all have the same length)?
The adapter needs to provide a unique hash for the transaction (the txn_hash) and table ids would be unique relative to this hash.
Got it, it's tables.len(), not table_data.len()
```move
#[test_only]
/// Testing only: allows to drop a table even if it is not empty.
public fun drop_unchecked<K, V>(table: Table<K, V>) {
```
Why is this useful? We will leak the table items anyway, right? Or do you expect the adapter to recursively drop (which might not be possible)?
I think it will be useful for testing so you can run the code without Move complaining about not consuming the table handle.
If I follow correctly, this native function is a no-op and `destroy_empty_box` is where the real logic is.
It's used in the current testsuite, as Todd describes.
(I see the thread was resolved, so reposting.) Supporting length turns some insertions (those creating new entries) and removals into operations that write an additional metadata node to storage. Should we consider a version of the map that doesn't know its length and is not safely droppable (and makes its embedding structure not droppable as well)? Because the deletion refund won't matter much, I suspect there won't be anybody who bothers to clean up tables anyway, especially the empty table itself. And the length-aware version can be implemented on top of that.
If we drop length, we still need the ability to check for emptiness to implement
I mean, in that (default) flavor of table,
Tables which have different gas models may be confusing for users. Why can't we have one which has the right properties (including refund)? If this is not supported by a given platform now, the refund might be ignored for now.
FYI I'm leaving town tomorrow afternoon for the ETAPS conference and will likely not be able to work on this PR again before being back 11/4. If we want to land this and possibly iterate on it, we need to do it in the next 24h.
```move
struct Box<V> has key, drop, store {
    val: V
}
```
(Not a blocker or anything, just trying to follow what's going on, so you can delay answering this.)
Sorry for not grokking this, but could you quickly point to where this makes the implementation easier?
I want to understand if there is anything we can do to improve the API on the native side of things in the VM.
I need `GlobalValue`, and it currently has an invariant that the value inside needs to be a `Struct`.
```move
/// Type of large-scale storage tables.
module Extensions::Table {
    /// Type of tables
    struct Table<phantom K, phantom V> has store {
```
The intention of this API is to not let developers reason about how keys are implemented. For example, whether keys are actually implemented via BCS or via some sha256 (as discussed earlier in this thread) is something this API intentionally hides from the user, thus leaving implementations the choice of how to do this.
I personally wouldn't get that as a developer. This sort of API would do the opposite and force me to think about how keys are implemented, since it is doing something unusual for Move: taking the key by reference. If it were an owned key-value pair, I think it would be easy to say "oh, it is putting the key-value pair in the map, easy". But with this reference, it raises questions that would force one to think about what's going on under the hood: "How is it using the value if it isn't owned? Is it actually storing the reference? Why can't I do that in my own structs?"
Given @jolestar's and @sblackshear's similar concerns, maybe it would be easier to stick to `K: copy + drop` for this initial PR?
You can still do a Sha256 wrapper, or any other wrapper you want (as a separate module, I now realize). We can then explore all of these separate wrappers as we learn more about how developers will use Tables. In short, starting with `K: copy + drop` might be better for initial exploration.
The refund didn't work for Ethereum (people don't bother to delete, except GasCoin), and they intend to remove it. My thinking is: to avoid arbitraging on gas, the refund must be cheaper than creation, and modern blockchains want creation to be cheap -- as a result, no one will bother deleting. So what I really tried to say is, I only want the not-destroyable version. And rent / state expiry is the right answer to the state explosion problem, not intentional deletion. It's a separate discussion; I agree we shouldn't block on that today.
I'd like to get this diff landed before Wolfgang goes on leave tomorrow. I think this is reasonable given that the work is exploratory and not law. The code as written is agreeable to our requirements from an interface perspective. There haven't been claims of breaking those requirements. It would give us a chance to actually evaluate the ergonomics and provide meaningful feedback.
Are there any objections to landing this and continuing the conversation after one or more parties begin to build upon this?
If there aren't any objections, I'll swing by tomorrow at 9 AM PST.
```move
/// Type of large-scale storage tables.
module Extensions::Table {
    /// Type of tables
    struct Table<phantom K, phantom V> has store {
```
> As a developer hacking on a pseudo-table, I definitely state that an owned key or making it copyable is irrelevant unless you want to support enumeration of the table. Considering that is a non-goal, the user of a table would need to concoct some other method to keep track of keys. I mean if a key cannot be generated or enumerated, how on earth would a user retrieve the data from the table? I cannot fathom the value in keeping such a key.
Thank you @wrwg
/land
This adds tables as an extension to Move, using the new VM extension mechanism. Tables (and other future extensions) are suggested to live in the `language/extensions` tree. They come with a Move library and a Rust implementation of the table extension data, including changeset representation, and a native function table. The CLI and unit testing framework have been extended to support extensions. This is controlled by a compile-time feature flag `table-extension`. The mechanism is generic enough to also support future extensions. A fairly complete test suite has been copied over from the EVM project. However, while functionality should be covered, Move unit tests do not support testing of change set generation and remote table resolver access, so this aspect stays untested. For this, integration tests are needed. Closes: diem#150
/land
@wrwg 💡 This PR is already queued for landing |
I prefer

For a no-copy example:

```move
module MyModule {
    struct Foo {
        f1: u64,
    }
    public fun table_test() {
        let table: Table<Foo, u64> = // get a table
        let foo = // get a Foo instance
        table.insert(&foo, 1);
        // update the key
        foo.f1 = 2;
        // how does the developer think about this?
        let v = table.borrow(&foo);
    }
}
```

For a copy example:

```move
module MyModule {
    struct Foo has copy, drop {
        f1: u64,
    }
    public fun table_test() {
        let table: Table<Foo, u64> = // get a table
        let foo = // get a Foo instance
        table.insert(copy foo, 1);
        // update the key
        foo.f1 = 2;
        // It is easier for developers to understand
        // that the value is associated with the copied key,
        // so v cannot be retrieved via a modified key.
        let v = table.borrow(foo);
    }
}
```
Thanks Jolestar, this is an interesting example. However, a developer who would make the mistake in the first example would also lack an understanding of the fundamentals of reference semantics in Move. The value behind a
Although this code is a bit stupid (😂) and modifying the key is not a good idea in any language, it may happen, and the above example code works, especially if the key-update operation does not happen in the same method.
Even though I do think
This design will allow the Move Prover to reason about tables. Because of the opaqueness of
This should address @tnowacki's and @jolestar's concerns, and also @sblackshear's concern (which I think is a different one), plus it also works with the requirements of the prover (@DavidLDill @shazqadeer). I still believe passing
```rust
/// extension.
pub struct NativeTableContext<'a> {
    resolver: &'a dyn TableResolver,
    txn_hash: u128,
```
Yeah, it would be great to have a stronger hash than just 16 bytes. We moved Aptos away from 16-byte addresses due to the weak properties and the lack of user ergonomics as a result.
That is an interesting question. I think we need to consult a cryptography expert here (@kchalkias?). My thought was that while a u256 can hash all the known data on the planet, we don't need that precision in this case, because the variety of the possible values is restricted.
If we do 256, we should use the type U256 from the primitive_types crate instead of the direct array.
... and to note: it seems common in the Ethereum world for such cases to compute 256-bit hashes (keccak in their world) but then slice them down to 8 or 16 bytes. This is theoretically sound because every bit in the hash has the same randomness. Indeed, the collision probability may rise.
There are major UX benefits with 128-bit addresses (many wallet users have noticed that anyway re usability), and the original decision was that, unlike Ethereum, by design we restricted sending tx to future addresses (to avoid dealing with off-chain attacks). However, global demand for these applications has risen since then, and the selection actually depends on whether on each address creation we check our hashset of existing addresses.
So today we need to increase the length, to err on the safe side.
The reality is many cool ideas would benefit from, e.g., 160-bit addresses (half of the world's chains use this size anyway): a) due to compatibility reasons, b) fitting a transaction in an SMS, c) improved caching requirements, d) tx network payload size, etc.
But we recently realized that Ambush Attacks affect Ethereum's 160-bit addresses too, in some non-updated node libraries. All in all, in 2022 (I would suggest this to other chains too, including our Ethereum friends) we should gradually move to 32-byte addresses, as @davidiw mentioned.
Attaching Ambush attack presentation from late 2020 Ambush Attacks on 128 and 160 bit addresses - Kostas Chalkias
Thanks for the explanations. Yes, I agree for addresses, and Aptos already pushed for extending Move addresses to 256 bits.
Here the situation is a bit different. We want to create a unique key for an object in storage (like a 'guid'). The way we do that is to take a base hash identifying the transaction and combine it with a counter local to that transaction. Currently we use a 128-bit base hash, combine this with the 16-bit counter, hash the result into 256 bits, then truncate the result to 128 bits to obtain the key. The question is whether this should follow the same scrutiny as for addresses, i.e. do it all in 256 bits. My layman's gut here was telling me that the information we are hashing is not as diverse as arbitrary addresses, but I might get that wrong, and the domain you are hashing over may not really matter here. For the record, the EVM does something similar, e.g. in the domain of method selectors.
The ambush attack I described above probably applies to objects as well, right? If we eventually truncate to 128-bit objectIDs, it is theoretically possible, even with 256-bit addresses, for one to off-chain compute two 256-bit addresses where objectID1 = Hash(address1, counter1) = objectID2 = Hash(address2, counter2) for some counter1 and counter2. So, if when adding new objects we don't check against `HashSetofObjects.exists(ObjectID)`, then one can overwrite the identifier of a previous objectID.
How can this be exploited without a global hashSet check each time? I guess an example would be: handcraft the ambush attack in such a way that, instead of accessing Object1's data, you confuse the validators into loading/reading Object2's data during computations, and somehow drain some account?
Collisions on 128-bit hashes happen with probability 1/2^64 (by the birthday bound). This is a high probability. So, if the price to pay for a collision is just a performance hit, of course that's fine. But if the price is a break in the correctness of the execution (because the colliding key's value is maliciously overwritten), then use 256-bit hashes.
In general, when you are truncating cryptographic hashes, you are playing with fire. For example, truncating hashes by getting the last n/2 bits can be completely insecure (see https://crypto.stackexchange.com/questions/3153/sha-256-vs-any-256-bits-of-sha-512-which-is-more-secure/3155#3155).
While we know that truncating SHA2 hashes is not susceptible to these attacks, if in the future the hash function changes, there could be problems.
It is a fact that there exist applications that rely on pre-image security only (or not at all, re address collisions). For instance, Flow (Dapper Labs) uses 64-bit addresses, because they are assigned by the network (first come, first served). In theory you can have a system where each account is created from a counter of insertion order. That was the original plan for Libra too, and the only reason 128-bit was selected was to avoid land grabbing. So if you always check whether an address already exists at creation, and your application logic is not affected by general collisions but only by pre-image, 2nd pre-image (sometimes multi-target pre-image too), then many systems tolerate truncated hashes (incl. Bitcoin and Eth).
Also note that in Diem there is a decoupling between the address and the spending key. The spending key hash (auth_key) is not truncated, only the address id is, and that's why it could even be a counter (assuming we tolerate long-range attacks). The address/spending-key decoupling trick is a common pattern lately for UX-friendly addresses, assuming you know how to build such a system (the address is nothing more than a DNS service eventually).
So, the table's insert function will be
I have not committed to this yet; I'm just saying it could be a compromise if there are better arguments for why not. Also, if we settle on this, it is essential that the other conditions I listed are met, which keep Move verifiable and analyzable.
Unless we're committed to storing the value of Key in the table, this seems to give a false impression. Also, what are the performance considerations here? The key may be an arbitrarily large value, and a copy may have non-zero gas associated with it. To counter @jolestar's comment, I can see a confused programmer moving a key value into the table and then wondering how they get their key back. I think the fact that this is a table and not a map implies a different set of expectations. Perhaps the right solution is to expose a Map as well and then let the VM integrators decide if they want to support both or either. Mind you, a Map can theoretically be stored as a few tables -- `Table<&Key, Index>`, `Table<&Index, Key>`, `Table<&Key, Value>` -- then you get the ability to count, enumerate, and get keys back.
Motivation
Tables
Have you read the Contributing Guidelines on pull requests?
Yes