From c2b329fb74bb759a52d7c47668f327bb8f387b6e Mon Sep 17 00:00:00 2001
From: sg777 <8114482+sg777@users.noreply.github.com>
Date: Wed, 7 Feb 2024 19:41:36 +0530
Subject: [PATCH] Update verus_migration.md

---
 docs/verus_migration/verus_migration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/verus_migration/verus_migration.md b/docs/verus_migration/verus_migration.md
index ab18acf2..dd8ddeac 100644
--- a/docs/verus_migration/verus_migration.md
+++ b/docs/verus_migration/verus_migration.md
@@ -16,7 +16,7 @@ So with that in mind its important to notedown the following characteristics of
 1. When multiple partities trying to update either the same key or multiple keys of the ID at the same time(or you can say in the same block), then only the update tx that network sees first gets a chance and others simply gets rejected.
 2. For concurrent updates, one possible approach is always first check if there is any spent tx in mempool with regard to that ID, and make a spend tx on top of that mempool tx then in which case the multiple updates to the ID happens in the same block.`[Note: How this to be achieved is yet to test].`
 3. As we know that we can store multiple key value sets in contentmultimap, but while designing itself to the possible extent always make sure that if any data that needs concurrent updates handle with multiple ID's, subID's. Many applications can't afford the waittime of one blocktime for every update to ID to happen.
-4. `getidentity`only returns the data that is updated with latest UTXO. In order to get the data that is updated on an ID over a period(end block height - start blockheight) of time we need to use `getidentitycontent`.
+4. `getidentity`only returns the data that is updated with latest UTXO. In order to get the data that is updated on an ID over a period(end block height - start blockheight) of time we need to use [`getidentitycontent`](./getidentitycontent.md).
In bet we represent the data in JSON format and exchange it over sockets. Unlike socket communication, with VDXF IDs we need to be cautious about space: how we store the data on chain, and what data we store there. To minimize the storage space on the blockchain we encode the data as compact structures. At the moment we use structures to encode the data for a few APIs; eventually, once we have a clear idea of the contents of each update and of exactly what an ID can hold, we can map the ID data to a structure, encode it in hex, and store it.