
[Sum of Multiples] Add Approaches #3375

Draft · wants to merge 4 commits into main

Conversation

@MatthijsBlom (Contributor)

No description provided.

@@ -0,0 +1,3 @@
def sum_of_multiples(limit, factors):
is_multiple = lambda n: any([n % f == 0 for f in factors if f != 0])
return sum(filter(is_multiple, range(limit)))
@BethanyG (Member) commented on Apr 2, 2023:

This is problematic, as it binds a name to a lambda (which PEP 8 advises against), so it needs rework.

I think this is also referenced in the introduction. We'll need to remove all of those occurrences.

@MatthijsBlom (Contributor, Author) replied:

…I now see I also left a temporarily inserted list comprehension in there.

Would you rather have

def sum_of_multiples(limit, factors):
    return sum(filter(
        lambda n: any(n % f == 0 for f in factors if f != 0),
        range(limit)
    ))

and keep the explanation of lambda, or

def sum_of_multiples(limit, factors):
    def is_multiple(n):
        return any(n % f == 0 for f in factors if f != 0)

    return sum(filter(is_multiple, range(limit)))

@BethanyG (Member) replied:

Either works for me. The first might be preferable, considering that you've already done a good explanation of lambda. The second one would need an explanation of nested functions. Absolutely not opposed to doing that - but it is extra work for you.

lambda can sometimes be slower in filter or map, since it opens another stack frame. But that's sorta irrelevant to this particular exercise, methinks.

@MatthijsBlom (Contributor, Author) replied:

> lambda can sometimes be slower in filter or map, since it opens another stack frame.

Can you elaborate or link to a source on this? I had no idea.

@BethanyG (Member) replied:

The TL;DR is that lambda has all the overhead/execution of any other function call (here's a link to Python Tutor for the two examples above), so in any situation where you are using a lambda over a built-in, or in place of a generator expression that does the same/similar operation, you will incur the 'extra' overhead of the function call. Trivial in small/medium cases - but it can add up for larger data sets.

And where you're converting that filter + lambda into a list or other 'realized' structure, it will be a lot slower than the corresponding comprehension or generator, due to the overhead of calling an additional function for every item in the list.

But this varies widely (since Python 3's filter and map return iterators instead of lists) - if you don't need to realize the values and can consume them lazily (like in a call to sum()), then filter/map/reduce outperform comprehensions, and are mostly even with generators. But again, a generator expression that doesn't call an extra function will be faster than filter + lambda (because of the lambda function call).
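
A minimal timing sketch of that comparison (illustrative only - the exact numbers depend on the interpreter and the input size):

from timeit import timeit

setup = "factors = (3, 5); data = range(100_000)"

# filter + lambda: one extra Python-level function call per item
with_lambda = "sum(filter(lambda n: any(n % f == 0 for f in factors), data))"

# generator expression: the same test inlined, no extra call per item
with_genexp = "sum(n for n in data if any(n % f == 0 for f in factors))"

print("filter + lambda:", timeit(with_lambda, setup=setup, number=10))
print("generator expr.:", timeit(with_genexp, setup=setup, number=10))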

Here are some articles - but many of them assume that values need to be realized, and so aren't really comparing apples to apples. The finxter blog (apologies for the aggressive ads there!) does do the comparisons for both realized and un-realized data - and you can really see the difference.

And I'll provide this for completeness, although it's really mostly a rant about personal preference regarding readability, not about performance: Trey Hunner: Stop Writing Lambda Expressions

@MatthijsBlom (Contributor, Author) replied:

Thanks. I haven't read all the links yet, but the gist certainly makes sense. I previously thought you meant that

def f(x): return f_body(x)

map(f, xs)
# would be faster than
map(lambda x: f_body(x), xs)

I already expected that inlined expressions in comprehensions are faster than mapped functions.

> An iterator is a bit like a cursor that can move only to the right.
@BethanyG (Member) commented:

I really like this explanation. Both from a cursor for typing ... and a cursor from a DB. Both can only be gone through once, and you can't back up. 😄
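
For illustration, a quick sketch of that forward-only, single-pass behaviour:

cursor = iter([3, 5, 6, 9])
print(next(cursor))   # 3 - the cursor advances one position
print(next(cursor))   # 5 - and again; there is no way to move it back
print(list(cursor))   # [6, 9] - consuming the rest exhausts the iterator
print(list(cursor))   # [] - a second pass yields nothing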

@MatthijsBlom (Contributor, Author) commented:

Uh-oh, yet another approach:

from heapq import heapify, heapreplace
from itertools import takewhile


def sum_of_multiples(limit, factors):
    def multiples():
        # min-heap of (next multiple, factor) pairs, seeded with each non-zero factor
        queue = [(_, _) for _ in factors if _ != 0]
        heapify(queue)
        previous_multiple = 0
        while queue:
            multiple, factor = queue[0]
            if multiple != previous_multiple:  # skip duplicates, e.g. 15 for factors 3 and 5
                yield multiple
                previous_multiple = multiple
            heapreplace(queue, (multiple + factor, factor))

    return sum(takewhile(lambda n: n < limit, multiples()))
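
A quick sanity check, assuming the exercise's usual semantics of summing the multiples strictly below limit:

print(sum_of_multiples(20, [3, 5]))   # 78 == 3 + 5 + 6 + 9 + 10 + 12 + 15 + 18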

@BethanyG (Member) commented on Apr 11, 2023:

WOOT! I was going to mention that one when you were posting in the forum, and then got distracted and wandered away. Glad you remembered it! 😄

@MatthijsBlom (Contributor, Author) commented on Apr 11, 2023:

Yesterday or this morning I remembered Eppstein's/Martelli's primes generator (improvements on StackOverflow); the above solution immediately followed.

Yet another approach:

from itertools import combinations
from math import lcm


def sum_of_multiples(limit, factors):
    factors = [_ for _ in factors if _ != 0]

    def range_sum(d):
        # `sum(range(0, limit, d))` but in constant time
        q = (limit - 1) // d
        return d * q * (q + 1) // 2

    return sum(
        # inclusion/exclusion
        (-1) ** (r - 1) * range_sum(lcm(*fs))
        for r, _ in enumerate(factors, start=1)
        for fs in combinations(factors, r)
    )
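
As a worked example of the inclusion/exclusion signs (not part of the diff), take limit = 20 and factors = [3, 5]:

range_sum(3)  == 63   # 3 + 6 + 9 + 12 + 15 + 18
range_sum(5)  == 30   # 5 + 10 + 15
range_sum(15) == 15   # lcm(3, 5); its multiples were counted twice above

# 63 + 30 - 15 == 78 == sum_of_multiples(20, [3, 5])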

@MatthijsBlom (Contributor, Author) commented on Apr 12, 2023:

@BethanyG Do you still wish for these lambda assignments to be avoided?

My own preference is to revert most of 5dbe577. (I'd like to keep the admonitions.) I could add some more prose on the matter, or lift the deleted paragraph into an exercism/caution, or something.
