
Maintain indices of available liquidity for open positions #2378

Closed
Tracked by #1724
zbuc opened this issue Apr 17, 2023 · 5 comments · Fixed by #2843
Comments

zbuc (Member) commented Apr 17, 2023

As part of #2332 we implemented functionality to query the chain state for open positions and find routing candidates based on available liquidity (#2377). However, the current implementation relies on iterating over every open position and bucketing/sorting the available liquidity, a strategy that is unlikely to be viable in the long term.

Instead, indices of the available liquidity for other assets should be maintained as part of the logic for manipulating liquidity positions and queried by candidate_set.

We could then expose this via APIs as well.

hdevalence (Member) commented:

xref #2569

hdevalence (Member) commented May 16, 2023

Maintaining indices of available liquidity might be expensive, and so we might instead want to use SwapExecutions or BatchSwapOutputData. EDIT: I think this comment is wrong, but we do need some design work here.

conorsch (Contributor) commented:

At retro today, we decided this isn't going to land for 55. Demoting to Next.

zbuc (Member, Author) commented Jun 27, 2023

What does it mean to index things by liquidity?

  • We want a metric that provides a rough proxy for which pairs have volume.
  • To compare the scores (the available liquidity of routing candidates) across different neighbors, the scores must all be in the same unit, i.e. the amount of asset A (the source asset) that can be bought with asset B (the candidate asset); see the toy sketch below.
  • This is not the direction the trade goes in, but liquidity is likely to be roughly symmetric, and indexing in the other direction would be easy to manipulate. Note that in rapidly falling markets the assumption of roughly symmetric liquidity is unlikely to hold.
  • The put_position method will need to become async so it can read both the existing position and the new position in order to compute the changes the update causes to the aggregate liquidity indices.
  • We should also add RPCs for queries like "here are all trading pairs" / "here are all neighbor trading pairs".
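
As a toy illustration of the "same unit" point (placeholder types, not actual penumbra code): every neighbor B gets a score denominated in the source asset A, so the scores can be compared and sorted directly.

```rust
// Placeholder types for illustration only.
type AssetId = [u8; 32];
type Amount = u128;

/// Rank candidate assets B by their score, i.e. the amount of the source
/// asset A purchasable with B. Because every score is denominated in A, the
/// scores are directly comparable; highest available liquidity comes first.
fn rank_candidates(mut scored: Vec<(AssetId, Amount)>) -> Vec<AssetId> {
    scored.sort_by(|x, y| y.1.cmp(&x.1));
    scored.into_iter().map(|(b, _score)| b).collect()
}
```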

zbuc (Member, Author) commented Jul 14, 2023

After discussion in Discord, we've coalesced around a design involving two different key types.

Goal: Given A, rank assets B by the amount of asset A that can be bought with B.

Key/value types (key => value):

  • A || be_bytes(A_from_B) => B: an ordered encoding of every asset B directly routable from A, where A_from_B is the amount of A that can be bought with B.
  • (A, B) => A_from_B: the current amount of A purchasable with B for every directly routable trading pair. This index is used to look up the existing key in the ordered index above so that updates can be performed efficiently. (See the sketch after this list.)
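
A rough sketch of what the two key encodings could look like, using simple byte concatenation and big-endian amounts; AssetId and Amount here are simplified placeholders, not the real penumbra types.

```rust
// Sketch only: placeholder types standing in for the real asset ID and
// amount types.
type AssetId = [u8; 32];
type Amount = u128;

/// Ordered index key: A || be_bytes(A_from_B). Ranging over the prefix `A`
/// yields candidate assets B ordered by the amount of A purchasable with B.
fn ordered_index_key(a: &AssetId, a_from_b: Amount) -> Vec<u8> {
    let mut key = Vec::with_capacity(32 + 16);
    key.extend_from_slice(a);
    key.extend_from_slice(&a_from_b.to_be_bytes());
    key
}

/// Secondary index key: (A, B). Its value is the current A_from_B aggregate,
/// which is needed to locate (and delete) the corresponding ordered-index key
/// on update.
fn pair_index_key(a: &AssetId, b: &AssetId) -> Vec<u8> {
    let mut key = Vec::with_capacity(64);
    key.extend_from_slice(a);
    key.extend_from_slice(b);
    key
}
```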

When updating a position (see the sketch after this list):

  • query the (A, B) index to get the current_A_from_B value for the trading pair
  • use the position's current reserves to compute current_position_contribution
  • use the position's new reserves to compute new_position_contribution
  • compute new_A_from_B = (current_A_from_B - current_position_contribution) + new_position_contribution
  • delete the existing key A || be_bytes(current_A_from_B) from the ordered index
  • write the new key A || be_bytes(new_A_from_B) with the asset ID B as its value
  • write the new value new_A_from_B for the key (A, B) in the secondary index
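
Continuing the sketch above, the update steps might look like the following, with an in-memory BTreeMap standing in for the chain state store; how a position's reserves translate into a contribution amount is out of scope here and taken as an input.

```rust
use std::collections::BTreeMap;

/// In-memory stand-in for the two indices; the real implementation would go
/// through the chain state store.
#[derive(Default)]
struct LiquidityIndex {
    /// A || be_bytes(A_from_B) => B
    ordered: BTreeMap<Vec<u8>, AssetId>,
    /// (A, B) => A_from_B
    by_pair: BTreeMap<Vec<u8>, Amount>,
}

impl LiquidityIndex {
    /// Apply a single position update's effect on the (A, B) aggregate,
    /// following the steps listed above.
    fn update_position(
        &mut self,
        a: AssetId,
        b: AssetId,
        current_position_contribution: Amount,
        new_position_contribution: Amount,
    ) {
        // Query the (A, B) index for the current aggregate.
        let current_a_from_b = self
            .by_pair
            .get(&pair_index_key(&a, &b))
            .copied()
            .unwrap_or(0);

        // Remove the old contribution and add the new one.
        // (Checked/saturating arithmetic elided for brevity.)
        let new_a_from_b =
            current_a_from_b - current_position_contribution + new_position_contribution;

        // Delete the existing ordered-index key and write the new one,
        // keeping B as the value.
        self.ordered
            .remove(&ordered_index_key(&a, current_a_from_b));
        self.ordered.insert(ordered_index_key(&a, new_a_from_b), b);

        // Record the new aggregate in the secondary index.
        self.by_pair.insert(pair_index_key(&a, &b), new_a_from_b);
    }
}
```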

Finding the candidate set (see the sketch after this list):

  • range over the ordered index with prefix A; this streams asset IDs B in order of the amount of A purchasable with B
  • limit the range query at some threshold
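
And a sketch of the candidate-set query against the same in-memory stand-in, reading "limit at some threshold" as a simple cap on the number of candidates returned.

```rust
impl LiquidityIndex {
    /// Return up to `limit` candidate assets B for source asset A, ordered by
    /// descending amount of A purchasable with B. The ordered index sorts
    /// ascending by the big-endian amount, so the prefix range is walked in
    /// reverse.
    fn candidate_set(&self, a: &AssetId, limit: usize) -> Vec<AssetId> {
        // The range [A, A || 0xFF..FF] covers every key for source asset A.
        let start = a.to_vec();
        let mut end = a.to_vec();
        end.extend_from_slice(&Amount::MAX.to_be_bytes());

        self.ordered
            .range(start..=end)
            .rev()
            .take(limit)
            .map(|(_key, b)| *b)
            .collect()
    }
}
```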

Notes:

  • This multi-index approach will lead to many tombstoned values in the cache, because the cache cannot determine whether a deleted key was newly added in the topmost layer (in which case it could be removed entirely) or was committed in an earlier cache layer (in which case a tombstone value is required). If this turns out to be a problem in benchmarks, we can optimize later.
  • By streaming the asset IDs returned from queries against the ordered index, path search can begin processing the fixed candidate set before the dynamic candidate set has been fully returned (see the sketch below).
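
A minimal sketch of the streaming idea, assuming the futures crate; the fixed candidates are available immediately, while the dynamic candidates arrive as the index range query streams them back.

```rust
use futures::stream::{self, Stream, StreamExt};

/// Sketch: yield the fixed candidate set immediately, then splice in the
/// dynamic candidates as they stream back, so path search can start work
/// before the dynamic set has fully materialized.
fn all_candidates<T>(
    fixed: Vec<T>,
    dynamic: impl Stream<Item = T>,
) -> impl Stream<Item = T> {
    stream::iter(fixed).chain(dynamic)
}
```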
