
Created a persistently amortized version of Data.Vector.Map #2

Open · wants to merge 9 commits into master
Conversation

bacchanalia

Your talk at Mozilla inspired me to read Okasaki's Purely Functional Data Structures, and then I decided to try to apply it to improve your COLA implementation under persistent usage.

@ekmett
Owner

ekmett commented Nov 29, 2013

Take a look at the overmars branch. I just realized I hadn't pushed it til now. I'll go through your code and compare.

@bacchanalia
Author

I took a look at the overmars branch. I don't see why the bounds are worst case instead of amortized. It looks like you set up an incremental merge, but don't actually pay for it incrementally. Work in progress?

I've been trying to observe the worst-case behavior in both our versions. For mine it should show up between sizes 2^n − n and 2^n − 2, and for yours between 3·2^n − 3 and 3·2^n − 2, but I haven't been able to write code that noticeably demonstrates it so far.
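To make the spikes concrete, here is a small model of my own (a sketch, not code from either branch) treating a COLA insert like a binary-counter increment: the number of full levels an insert has to merge equals the number of trailing 1-bits in the current size, so the cost spikes when the size sits just below a power of two.

```haskell
-- Sketch: merge-cascade cost of COLA inserts, modeled as a binary counter.
-- (My own illustration, assuming the classic power-of-two level scheme;
-- not taken from either branch under discussion.)
import Data.Bits (popCount, complement, (.&.))

-- Number of trailing 1-bits of k: the count of full levels
-- that must cascade when inserting the (k+1)-th element.
mergeCost :: Int -> Int
mergeCost k = popCount (k .&. complement (k + 1))

main :: IO ()
main =
  -- Costs for the first 16 inserts; the spikes land at k = 2^j - 1.
  print [ mergeCost k | k <- [0 .. 15] ]
  -- prints [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4]
```

Under persistence this is exactly what goes wrong with plain amortization: re-running an insert at a spike size pays the full cascade every time.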

As a side note, I noticed your version takes a huge hit (50-80x) in my persistentLookup benchmark.


@ekmett
Copy link
Owner

ekmett commented Dec 1, 2013

I haven't fixed the performance of it yet, just set up the right recurrences. :)

The deamortization follows the approach from my deamortized ST post on FP Complete. I have log n merges happening slowly, one step at a time, but I never do more than log n steps of work for each insert.
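The core of that trick can be sketched as an explicit merge-in-progress that advances by O(1) work per call (my own illustration, with assumed names; not the overmars code): an insert that nudges each of its at most log n pending merges forward by a constant number of steps does only O(log n) work, yet every merge finishes before its output is needed.

```haskell
-- Sketch: an incremental two-way merge paid for one step at a time.
-- (Illustrative names; not taken from the overmars branch.)
data Merge a = Merge
  { run1    :: [a]  -- first sorted run still to be consumed
  , run2    :: [a]  -- second sorted run still to be consumed
  , emitted :: [a]  -- output so far, kept in reverse order
  }

-- One O(1) step of the merge: move a single element to the output.
step :: Ord a => Merge a -> Merge a
step m = case m of
  Merge (x:xs) (y:ys) d
    | x <= y    -> Merge xs (y:ys) (x:d)
    | otherwise -> Merge (x:xs) ys (y:d)
  Merge (x:xs) [] d -> Merge xs [] (x:d)
  Merge [] (y:ys) d -> Merge [] ys (y:d)
  Merge [] [] _     -> m

-- The finished run, once both inputs are exhausted.
result :: Merge a -> Maybe [a]
result (Merge [] [] d) = Just (reverse d)
result _               = Nothing

main :: IO ()
main = do
  let m0 = Merge [1,3,5] [2,4,6] [] :: Merge Int
      -- six elements in total, so six steps finish the merge
      m6 = iterate step m0 !! 6
  print (result m6)  -- prints Just [1,2,3,4,5,6]
```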

It is currently taking a massive speed hit because it needlessly uses the fusion machinery.

It probably also makes sense to switch to a less rigid scheme that permits me to do log n steps at a time on the same merge, rather than one step each, glacially, across log n merges.

