
Refactor: Use filter and map instead of reduce #948

Closed
wants to merge 1 commit

Conversation

dmsnell
Member

@dmsnell dmsnell commented May 30, 2017

Just curious on the choice of the reduce. Seems like a filter and map
are more appropriate when joined with a space.

@aduth any particular reason for the reduce approach? seemed to stick out when I saw it, mainly because of the extra space it leaves at the end.

granted, the reduce only has a single iteration while this has two, but I assumed you hadn't chosen what you did for speed reasons
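To make the comparison concrete, here is a sketch of the two approaches under discussion. The names (`attributes`, `serializeWithReduce`, `serializeWithFilterMap`) are made up for illustration and are not the actual Gutenberg code:

```javascript
// Hypothetical attribute serializer, contrasting the two approaches.
const attributes = { a: 'foo', b: 'bar', c: undefined };

// reduce: a single pass, but concatenating eagerly leaves a trailing space
function serializeWithReduce( attrs ) {
	return Object.keys( attrs ).reduce( ( result, key ) => (
		attrs[ key ] === undefined
			? result
			: result + `${ key }="${ attrs[ key ] }" `
	), '' );
}

// filter + map: the stages stay separate, and join() adds separators
// only between items, so no trailing space appears
function serializeWithFilterMap( attrs ) {
	return Object.keys( attrs )
		.filter( ( key ) => attrs[ key ] !== undefined )
		.map( ( key ) => `${ key }="${ attrs[ key ] }"` )
		.join( ' ' );
}
```

The reduce version yields `'a="foo" b="bar" '` (note the dangling space), while the filter/map version yields `'a="foo" b="bar"'`.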

@dmsnell dmsnell added the [Feature] Parsing label May 30, 2017
@dmsnell dmsnell requested a review from aduth May 30, 2017 22:00
@dmsnell dmsnell force-pushed the tiny-refactory/question-reduce-filter-map branch 3 times, most recently from 103f5f8 to d7406a4 Compare May 30, 2017 22:26
@aduth
Member

aduth commented May 30, 2017

granted, the reduce only has a single iteration while this has two, but I assumed you hadn't chosen what you did for speed reasons

It was a consideration, sure.

@dmsnell dmsnell force-pushed the tiny-refactory/question-reduce-filter-map branch from d7406a4 to 4020934 Compare May 30, 2017 22:40
@dmsnell
Member Author

dmsnell commented May 30, 2017

@aduth I bring it up because it seems like we're exposing a loss of control by eagerly converting to a string and adding the accumulator, whereas if we keep the transformation stages separate and then combine them, it's clearer to read and intercept. it also eliminates the off-by-one error.

put another way, instead of coupling the side-effects (conversion to a string) along the course of the process, push them to the end

thoughts?

Just curious on the choice of the reduce. Seems like a filter and map
are more appropriate when joined with a space.
@dmsnell dmsnell force-pushed the tiny-refactory/question-reduce-filter-map branch from 4020934 to 7b43ce8 Compare May 30, 2017 22:45
@BE-Webdesign
Contributor

BE-Webdesign commented May 30, 2017

I think using reduce instead of chained operators is usually a good thing. The performance benefit is often negligible, but reduce has other benefits. Rather than refactoring out the reduce, you could create a composed function via flow() to break things up, like you are doing with the chained calls. That way you make it more legible within reduce while keeping only one iteration. That is just another stylistic choice, and I think what is already there is perfectly fine and legible. The map(), filter() approach is probably fine as well, but squeezing every reasonable ounce of performance out of Gutenberg will be a positive.
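One possible reading of the flow() suggestion, sketched with a minimal inline flow() so the example is self-contained (lodash's flow() composes left-to-right the same way); the helper names here are illustrative, not actual Gutenberg code:

```javascript
// Minimal left-to-right function composition, mirroring lodash's flow().
const flow = ( ...fns ) => ( x ) => fns.reduce( ( acc, fn ) => fn( acc ), x );

// Illustrative per-key steps, composed so the single reduce stays legible.
const toPair = ( attrs ) => ( key ) => [ key, attrs[ key ] ];
const toAssignment = ( [ key, value ] ) => `${ key }="${ value }"`;

function serialize( attrs ) {
	const step = flow( toPair( attrs ), toAssignment );
	return Object.keys( attrs )
		.filter( ( key ) => attrs[ key ] !== undefined )
		.reduce( ( result, key ) => (
			result ? result + ' ' + step( key ) : step( key )
		), '' );
}
```

Seeding with `''` and only adding the separator when the accumulator is non-empty also avoids the trailing space, at the cost of a branch per item.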

@dmsnell
Member Author

dmsnell commented May 30, 2017

@BE-Webdesign could you elaborate on what you find more beneficial? this PR isn't meant per se to request a change to the code, but to better understand the design choices being made here.

I'm asking not because I think that map/filter is just better, but because reduce() feels awkward and forced, and because it introduces weirdness that we have to account for elsewhere (like the dangling combiners)

Note: I used to write code similar to this when I was starting to work with array functions. It felt more approachable to me at the time, until I started learning more about how the array functions work in harmony and how deferring the collapse into a string until the end can benefit us along the way

@BE-Webdesign
Contributor

BE-Webdesign commented May 30, 2017

(like the dangling combiners)

I don't know what dangling combiners are, so if this somehow improves some of the functionality of the parser that is great. All I was trying to point out was that most likely you can achieve the same result and benefits using function composition (via flow) and reduce. Like I said above though, I don't know enough about parsing really to offer a good enough judgement, so I will defer to someone else 😄 .

@dmsnell
Member Author

dmsnell commented May 30, 2017

I don't know what dangling combiners are

haha, nothing fancy, just a trailing space on those combined strings. for example, because we're not doing the semantic join on a list of things, we are doing string concatenation with an extra space at the end. this dangling space bleeds through to the other functions/levels of abstraction

"a=foo b=bar " vs "a=foo b=bar" vs [a, foo], [b, bar] -> "a=foo b=bar" (joined at the caller instead of the callee)

All I was trying to point out was that most likely you can achieve the same result and benefits using function composition (via flow) and reduce.

yep. computationally the same 😄 see transducers for robust libraries doing just this…

transduce(
	filterer( key => realAttributes[ key ] !== undefined ),
	mapper( key => `${ key }="${ realAttributes[ key ] }"` )
)
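For a self-contained version of the transducer idea sketched above, the helpers can be written by hand; `filterer`, `mapper`, and `transduce` here are minimal stand-ins for illustration, not a real library's API:

```javascript
// Each transducer wraps the next reducer, so filtering and mapping
// fuse into a single pass over the input.
const filterer = ( pred ) => ( next ) => ( acc, x ) => ( pred( x ) ? next( acc, x ) : acc );
const mapper = ( fn ) => ( next ) => ( acc, x ) => next( acc, fn( x ) );
const compose = ( f, g ) => ( x ) => f( g( x ) );
const transduce = ( xform, reducer, init, input ) => input.reduce( xform( reducer ), init );

const realAttributes = { a: 'foo', b: undefined, c: 'baz' };

const serialized = transduce(
	compose(
		filterer( ( key ) => realAttributes[ key ] !== undefined ),
		mapper( ( key ) => `${ key }="${ realAttributes[ key ] }"` )
	),
	( acc, entry ) => acc.concat( entry ), // accumulate into an array
	[],
	Object.keys( realAttributes )
).join( ' ' ); // collapse to a string only at the very end
```

This keeps the single iteration of reduce while leaving the stages independently composable, which is the trade-off being discussed.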

if this somehow improves some of the functionality of the parser that is great

don't think it does. doesn't matter much at all. I would consider it a win for code maintainability in the sense that it communicates the intent more clearly of what we want to do and also in that it provides easier hooks into the "transformation pipeline," if you will, whereas the single reduce() couples those concerns together to where you have to modify the entire thing to inspect/transform a single step

@BE-Webdesign
Contributor

Yup, transducers and Clojure for the win.

@aduth
Member

aduth commented May 31, 2017

Before was fine and trivially more performant. This is still fine and trivially more functionally explicit. I don't see a net change in objective "better"-ness here, and am therefore indifferent to it.

The reduce could be changed to accumulate into an array and joined, which would solve the dangling space while still maintaining O(n) (O(2n) if you count join as iterating).
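The array-accumulator variant described above might look like this (a sketch with illustrative names, not the actual patch):

```javascript
// reduce pushes kept entries into an array, and join() adds separators
// only between items, so no dangling space ever appears.
function serialize( attrs ) {
	return Object.keys( attrs ).reduce( ( result, key ) => {
		if ( attrs[ key ] !== undefined ) {
			result.push( `${ key }="${ attrs[ key ] }"` );
		}
		return result;
	}, [] ).join( ' ' );
}
```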

@dmsnell
Member Author

dmsnell commented Jun 1, 2017

I'm closing this out because the discussion was helpful and served the intended purpose. Thanks y'all!
