
[Route Prefetching] Allow routes to request data in parallel #97

Closed · wants to merge 6 commits

Conversation

@nickiaconis

Updated to @stefanpenner's prefetch proposal:

Rendered.

The demo is especially insightful (and has been updated to match the new RFC).
http://nickiaconis.github.io/ember-parallel-model-demo/

@rwjblue (Member) commented Oct 5, 2015

👍 - I love the goal and the general idea.

This is a major breaking change.

The APIs that are designed in this RFC need to be completely backwards compatible, or this change has to wait until Ember 3.0 (which is quite a long way off). FWIW, I believe that it is possible to make this migration in a backwards compatible way with enough thought and planning...

@Serabe (Member) commented Oct 5, 2015

👍

What would happen if a transition is done in a beforeModel hook?


# Drawbacks

- This is a major breaking change.
Contributor:

I think a way of getting around the incompatibilities would be to create a new object type that opts you into new semantics. There is a precedent for this type of feature introduction with GlimmerComponent. This could be something like ParallelizedRoute, NonblockingRoute, MyAPIIsSlowSoDoMoreThingsInParallelRoute, etc.

Member:

maybe AsyncRoute

Contributor:

👍

Member:

There is a precedent for this type of feature introduction with GlimmerComponent.

Agreed. Definitely a possibility, but we (mostly @wycats, @tomdale, and @chancancode TBH) worked very hard to not need to make a different base class and discovered through that pain that it was unavoidable. It was definitely a last resort...

Member:

@rwjblue yup.

I think a separate name, provided via a community add-on (with minimal hooks in internals), may be an escape valve. I want us to be careful not to cram ideas (good or bad) too quickly into Ember, as once they are added, they are tricky and extremely costly to unwind.

We can look to several hasty (although well thought out) 1.13.x features as a demonstration of this.

I do believe the motivation here is good (and demonstrates real business value), and the implementation may be as well. But it is well known that the more users we can get using this in the wild, the faster we can explore the problem space, and the more unknown unknowns we can squash before it becomes part of the finalized public API.

Member:

To help explore the problem space I'd love to get an early version in behind a feature flag so we have fewer unknowns. It seems like it would be a good fit to come up with an API for this sort of behavior at the same time as trying to solidify routable components so that they play nicely together. It also gives us a GlimmerComponent/"use strict"; type of escape valve for opting in if we can't come up with a better way to handle it.

The option of building an addon has community effects that will be quite painful. The coupling between liquid-fire and Ember is already tight; an addon to accomplish this would make liquid-fire look positively decoupled by comparison. Self-trolling will occur all over the place.

@ef4 (Contributor) commented Oct 5, 2015

I'm good with the idea of running many routes in parallel. But I'm not ok with running all the hooks within a single route in parallel. beforeModel, model, and afterModel should still resolve in order, and each should wait for the one before. Otherwise afterModel becomes a terrible name that will trip people up.

If you want your model hook to start immediately, don't return a promise from your beforeModel hook.
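
For illustration, a minimal sketch of that distinction; the cacheWarmer service here is purely hypothetical:

import Ember from 'ember';

export default Ember.Route.extend({
  beforeModel() {
    // Fire-and-forget: the promise is intentionally not returned,
    // so model() starts immediately instead of waiting on this work.
    this.get('cacheWarmer').warm();
  },

  model(params) {
    // Runs right away; returning the promise above would have serialized these.
    return this.store.find('post', params.post_id);
  }
});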

FWIW, there's already a proposal in the routable components RFC to unify all three hooks and require model to be idempotent. I think that change and this change could go together and get triggered by the same opt-in upgrade.

@stefanpenner (Member)

There is also another approach: all hooks stay the same, but additional hooks (or ideally additional pre-fetch objects) are added purely for eager loading / prefetching. The aim is not to disrupt the existing programming model, but to provide the same latency reduction this proposal aims to achieve.

Something like:

export default Ember.Route.extend({
  async prefetch(params) {
     // executed upfront, allowing each route to opt into pre-fetching semantics.
     // this should be thought of as nearly entirely declarative
     // one could imagine also providing GraphQL-esque (or actual GraphQL) declarations, which can be composed at runtime.

     return {
       user: this.store.find('user', (await this.prefetched('parent.route')).user.id),
     };
  },

  model(params) {
    // .. executed normally, after the route is entered and the route's own prefetch has completed,
    // relying on identity maps and such which have pre-fetched.
    return this.store.find('user', params.user_id);
  }
});

This makes prefetching:

  • purely additive
  • opt-in
  • a separate DSL that aims to NOT expose existing zalgo API.
  • easy to migrate a code-base slowly, without total buy-in to the parallel model up front.

The algorithm would be as follows (a rough code sketch follows the lists below):

  1. begin routing
  2. recursively invoke every descendant route's prefetch, assuming the developer has correctly de-zalgoified
  3. begin entering routes as their corresponding prefetch (and their parents' prefetches) have been invoked

This should:

  • provide the concurrency option that prevents unintentional sequential route waterfall.
  • work nicely given existing solutions and apps.
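
A rough sketch of that dispatch; matchedRoutes, route.prefetch, and route.enter below are illustrative stand-ins to show the ordering, not Ember's actual router internals:

async function transitionWithPrefetch(matchedRoutes, paramsByRoute) {
  // 1. Kick off every route's prefetch immediately; nothing waits on a sibling or parent here.
  const prefetches = matchedRoutes.map((route) =>
    Promise.resolve(route.prefetch ? route.prefetch(paramsByRoute[route.name]) : undefined)
  );

  // 2. Enter routes top-down as usual; each route only waits on its own prefetch,
  //    which by construction started no later than its parents' did.
  for (let i = 0; i < matchedRoutes.length; i++) {
    const prefetched = await prefetches[i];
    await matchedRoutes[i].enter(prefetched); // beforeModel/model/afterModel run as today
  }
}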

@ef4 (Contributor) commented Oct 5, 2015

@stefanpenner I think that whole idea can be done in userspace using public APIs once we ship the changes proposed in #95.
Your application route's beforeModel hook will receive a well-defined structure that tells you exactly what routes are about to get activated, and all their parameters. At that point it's easy to do something like call your own custom hook on each of the route classes.
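
A speculative sketch of that pattern, assuming (per #95) that the transition handed to beforeModel exposes something like handlerInfos describing the routes about to be activated; the exact shape is an assumption here:

// app/routes/application.js
import Ember from 'ember';

export default Ember.Route.extend({
  beforeModel(transition) {
    (transition.handlerInfos || []).forEach((info) => {
      const route = info.handler;
      if (route && typeof route.prefetch === 'function') {
        // Start every route's custom prefetch hook in parallel; the result is
        // intentionally not returned, so routing itself is not blocked on it.
        route.prefetch(info.params);
      }
    });
  }
});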

I think that's a good solution for teams that are itching for a solution right away. We would still want to iterate to a fully baked-in solution that will Just Work.

@nickiaconis (Author)

@ef4 I agree with you that beforeModel, model, and afterModel should run sequentially or be replaced by a single idempotent hook.

Can you give an example of using the API from #95? I don't follow.

@stefanpenner Interesting idea to add an additional hook for eager loading. Would it rely on Ember Data caching to get the prefetched data in the model hook? I'm not sure I like that as it doesn't provide a mechanism for people using store.query or not using Ember Data at all.

@mmun (Member) commented Oct 6, 2015

The prefetch result could be made available in the transition object passed to the model hook, but it feels like a lot of hoops to jump through. I think prefetch is a very intuitive name though.
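
Something along these lines, where transition.prefetched is a hypothetical slot rather than an existing property:

import Ember from 'ember';

export default Ember.Route.extend({
  prefetch(params) {
    return this.store.find('user', params.user_id);
  },

  model(params, transition) {
    // Reuse whatever prefetch resolved with instead of fetching again;
    // `transition.prefetched` is illustrative only.
    return transition.prefetched || this.store.find('user', params.user_id);
  }
});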

@nathanhammond (Member)

If you want your model hook to start immediately, don't return a promise from your beforeModel hook. -@ef4

This is the recommendation that we've given for a long time even with model hooks: if you want instant rendering, effectively skip the hooks and reassign properties on the object you returned as the promises resolve. We lose things like nice loading route behavior if we adopt that approach, and it becomes unnecessarily complicated to know when you're able to do certain things (you have to pass around references to promises).

In my opinion the thing that distinguishes afterModel simply becomes that it has a reference to the model promise passed in as an argument. (This will be more important if you have limited context to prevent state mutation.)

As more of a power user I'm of course in favor of running all the things in parallel, but I recognize that the consequences of that for the majority of the Ember ecosystem are nontrivial. Unification under routable components neatly solves this problem.

@nathanhammond (Member)

@stefanpenner The API you propose is interesting, but also assumes that the only async behavior we're dealing with is loading of data. It could also be writing to disk (NW.js) or a long-running process that we handed off to a WebWorker. Granted, those are less likely and we should aim for making sure that the happy path is optimized. Even in the world where we accounted for all use cases it would still likely make sense to have a data hook which can be invoked on all routes to build the data dependency tree.

For example, @nickiaconis implemented multiplexing on top of Ember Data already which is one of the primary motivations for this RFC: collating across route boundaries. Discussions are being had about how to do client-side composition of data models in the vein of Falcor/GraphQL such that it is a fully-resolved data-dependency tree instead of still requiring round trips between tree "layers" (depth from root node(s)).

@stefanpenner (Member)

@stefanpenner Interesting idea to add an additional hook for eager loading. Would it rely on Ember Data caching to get the prefetched data in the model hook? I'm not sure I like that as it doesn't provide a mechanism for people using store.query or not using Ember Data at all.

Either an identity map, or other hooks (like model) could use the this.prefetched API as well, to scoop data from the prefetch hooks.

@stefanpenner The API you propose is interesting, but also assumes that the only async behavior we're dealing with is loading of data.

It could also be writing to disk (NW.js) or a long-running process that we handed off to a WebWorker. Granted, those are less likely and we should aim for making sure that the happy path is optimized.

This seems orthogonal.

Even in the world where we accounted for all use cases it would still likely make sense to have a data hook which can be invoked on all routes to build the data dependency tree.

the prefetch recommendation doesn't hinder this.

For example, @nickiaconis implemented multiplexing on top of Ember Data already which is one of the primary motivations for this RFC: collating across route boundaries. Discussions are being had about how to do client-side composition of data models in the vein of Falcor/GraphQL such that it is a fully-resolved data-dependency tree instead of still requiring round trips between tree "layers" (depth from root node(s)).

the prefetch recommendation doesn't hinder this.

@nathanhammond (Member)

@stefanpenner We could turn the whole prefetch hook into a DSL which we can serialize into Ember Data calls, lisp-like functional transformations that could be passed to the server a la GraphQL, or into vanilla Ajax calls. We just design/adopt a way to describe what data you need as a fully resolved tree and then let some serialization of that definition inform your DataAdapter "here is what I need." How it goes and gets that data is then entirely an implementation detail which makes the public API surface much easier to use. (No need to know about await this.prefetched or whatever we come up with.)
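
A speculative sketch of what such a declarative description might look like; every name below is illustrative rather than an existing API:

import Ember from 'ember';

export default Ember.Route.extend({
  prefetch(params) {
    // A plain data description of what this route needs. An adapter elsewhere
    // decides how to satisfy it: Ember Data calls, batched Ajax, GraphQL, etc.
    return {
      post: { type: 'post', id: params.post_id },
      comments: { type: 'comment', filter: { post: params.post_id } }
    };
  }
});

// A data layer could then merge the descriptions from every route in the
// transition and issue a single batched request for the whole tree.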

@stefanpenner (Member)

@nathanhammond yes, I believe prefetch (or very likely some more aptly named hook) is the primitive to allow for this.

My motivation is to find some stepping stones which bridge the gap between declarative/prefetch ideas and the existing world. I believe the existing world has many merits and strengths; we would be wise not to fall into the trap of believing a drastic departure is required.

The problem as I see it takes the following shape:

  • GlobalOptimization === new & missing
  • local reasoning === existing & works tolerably well

It seems like both of these can complement each other nicely. In fact, it should be possible to minimize changes to the existing system while still reaping the rewards of a globally optimized one.

Some thoughts:

  1. the ideal DSL is still unknown, although the space is actively being explored (GraphQL et al.)
  2. generic pre-fetching based on a declarative DSL is often possible and can provide high value
  3. generic pre-fetching in conjunction with a turing-complete language can fill in the remaining gaps, but can also be used as the primitive to allow users to explore a DSL specifically geared to the given task.
  4. pre-fetching in conjunction with an existing identity map provides an additional win.
  5. activate -> before -> model -> after -> setupController ... deactivate hooks provide a DSL for the per-route domain; if 1-3 do their work, 4's current latency concerns should be mitigated.

Taking the layered approach may prove some existing APIs / timings redundant, which means in 3.0 they can be removed. The drastic departure of making everything concurrent feels quite hostile to existing applications, which may or may not benefit from this. In retrospect the existing APIs are fairly zalgo and would likely take different forms if implemented today.

@dgeb (Member) commented Oct 6, 2015

We could turn the whole prefetch hook into a DSL which we can serialize into Ember Data calls, lisp-like functional transformations that could be passed to the server a la GraphQL, or into vanilla Ajax calls. We just design/adopt a way to describe what data you need as a fully resolved tree and then let some serialization of that definition inform your DataAdapter "here is what I need." How it goes and gets that data is then entirely an implementation detail which makes the public API surface much easier to use.

@nathanhammond This is almost precisely what I'm working on in Orbit.js, which will soon have a lisp-like customizable query expression language. More details here: orbitjs/orbit#212

This approach allows for complex, customizable, and synchronous queries of in-memory data, which can be asynchronously "hydrated" from remote sources. When used with Ember, this will allow many model hooks to be resolved synchronously while still asynchronously fetching data. The actual fetch calls could be batched, as you suggest, depending on the capability of remote sources (GraphQL, JSON API, etc).

Of course, there are still going to be remote calls that should block route loading, in which case a promise should be returned to the model hook. But for many cases, synchronous resolution of in-memory queries combined with asynchronous fetching of remote data will provide the best user experience.

@wycats (Member) commented Oct 6, 2015

Here's what I think we should do:

  1. Add a new behind-a-flag escape valve that allows a route to ask Ember not to wait for its async hooks to complete before continuing with the next steps.
  2. Add new behind-a-flag APIs (asyncModelFor, etc.) that allow a route to manually wait for a previous step to complete.

Once these behind-a-flag APIs are in place, we can get more real-world feedback on what patterns (other than the obvious ones) emerge, which we can feed into the design process for a new public API.
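
A sketch of how those flagged APIs might read in practice; the nonBlocking property name is invented here, and asyncModelFor is the helper mentioned above, not a shipped API:

import Ember from 'ember';

export default Ember.Route.extend({
  // Ask the router not to block subsequent steps on this route's async hooks.
  nonBlocking: true,

  model(params) {
    // Explicitly wait for the parent route's model only where the dependency is real.
    return this.asyncModelFor('post').then((post) =>
      this.store.query('comment', { post_id: post.get('id') })
    );
  }
});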

For what it's worth, I am very confident that the public version of this API requires async functions to be ergonomic; async/await is at Stage 3 in TC39, supported by Babel, and we are getting ready to adopt it.

@tomdale (Member) commented Oct 6, 2015

Came here to chime in that I think different routes should load in parallel but am opposed to beforeModel and afterModel becoming forced into async as well, though I see @ef4 has already said the same thing more eloquently than I could. It seems like there is consensus on that point, which is great.

@wycats I am a little nervous about putting the functionality in behind a flag if we know that is not the final, desired API. Mostly because people will likely come to rely on it, and it may be a large refactor to go from e.g. a flag on existing routes to an entirely new Route base class, as someone proposed above.

@wycats (Member) commented Oct 6, 2015

@wycats I am a little nervous about putting the functionality in behind a flag if we know that is not the final, desired API. Mostly because people will likely come to rely on it, and it may be a large refactor to go from e.g. a flag on existing routes to an entirely new Route base class, as someone proposed above.

I think we need to learn more from users who get real benefits from parallelism before we can design the API. Today, users in that boat are monkey-patching into private APIs, which I think you can agree have a worse version of the problem you're worried about than a few known flagged APIs.

@wycats (Member) commented Oct 6, 2015

TLDR, in TC39 terms: let's move this proposal to Stage 0 😉

@rwjblue (Member) commented Oct 7, 2015

I think we need to learn more from users who get real benefits from parallelism before we can design the API.

Agreed.

Today, users in that boat are monkey-patching into private APIs

When folks monkey-patch private APIs they are clearly taking on any future issues themselves. This is not true of feature flags that have landed in core (where folks report and expect bugs to be fixed by the magic OSS fairy).

TLDR, in TC39 terms: let's move this proposal to Stage 0

Agreed, making an addon that provides the strawman proposal sounds wonderful!

@nickiaconis (Author)

TLDR, in TC39 terms: let's move this proposal to Stage 0

Agreed, making an addon that provides the strawman proposal sounds wonderful!

There have been several suggested alternatives to the original proposal. What are we moving forward with? I can update the RFC to reflect what it is we want to be implemented as Stage 0.

@nathanhammond (Member)

I think we need to learn more from users who get real benefits from parallelism before we can design the API.

I'm not sure how to get our target audience (mid-size applications, moderately familiar Ember devs) to test this since everybody commenting on this thread has large apps and already knows the internals of the router. We're all terrible as case studies.

Agreed, making an addon that provides the strawman proposal sounds wonderful!

Our tradeoff:

  • Addon which can immediately be used by anyone using Ember without specifically setting a feature flag or really understanding the risks. Broader test base. (This would be implemented by monkeypatching to accomplish exactly what @nickiaconis has as a diff, except with more copypasta.)
  • Behind a feature flag on Canary for people who are already comfortable being on the bleeding edge and won't complain if things break. Smaller test base. (Maybe @nickiaconis can be the magic OSS fairy. 😄)

I feel like the second one has less of a risk of becoming the next vendor prefix debacle, but that's just me.

@wycats (Member) commented Oct 7, 2015

When folks monkey-patch private APIs they are clearly taking on any future issues themselves. This is not true of feature flags that have landed in core (where folks report and expect bugs to be fixed by the magic OSS fairy).

I don't feel like this is what happens. When an addon that uses private APIs becomes popular, it becomes just as much an entitlement as an opt-in flag, but more so.

@rwjblue (Member) commented Oct 7, 2015

I don't feel like this is what happens. When an addon that uses private APIs becomes popular, it becomes just as much an entitlement as an opt-in flag, but more so.

I respectfully disagree.

@wycats (Member) commented Oct 8, 2015

I respectfully disagree.

Liquid Fire users have not felt like they were to blame for using private APIs, while users of the unstable visit API absolutely expected churn. I'm not sure where the disagreement is coming from.

@rwjblue (Member) commented Oct 8, 2015

This is a great example: liquid-fire users clearly understand that the issues they see are due to liquid-fire (very few issues reported to emberjs/ember.js are actually due to liquid-fire), while fastboot/visit API users often report issues to Ember (rightfully so, as much of the code to blame for the issues they are having is in Ember).

@nickiaconis (Author)

Ping.

TLDR, in TC39 terms: let's move this proposal to Stage 0

Agreed, making an addon that provides the strawman proposal sounds wonderful!

There have been several suggested alternatives to the original proposal. What are we moving forward with? I can update the RFC to reflect what it is we want to be implemented as Stage 0.

@nickiaconis (Author)

@stefanpenner -- The more I think about it, the more I like your idea of a prefetch hook. I'm going to update the PR based on that.

Also, to get a feel for it, I've taken a stab at implementing prefetch as an addon. It's almost working, but I've run into an issue. Router#willTransition isn't re-fired when a transition redirects. Any ideas how to work around that or maybe another Router hook from which to call the prefetch methods?


# Drawbacks

- Ember's API becomes larger.
Member:

it may be confusing to the programming model

@stefanpenner (Member)

I'm not quite sure (from reading this) what happens on refresh of a route, like when a QP changes and the model hook is intended to be run.

Or what happens on a link-to (with model, not id provided) pivoting on the route which has a prefetch.


async model() {
  return {
    OP: this.modelFor('post')).author,
Member:

extra parenthesis

Author:

Need linting for code blocks in markdown. 😛

@tomdale (Member) commented Feb 5, 2016

@nickiaconis We discussed this in the core team meeting today and we had some concerns with the RFC as is. @stefanpenner wanted to make sure this didn't languish, so he asked me to write the concerns down. I speak only for myself but have tried to incorporate the problems other core team members identified. I'll start by laying the philosophical groundwork then get down to the merits of this specific RFC after.

For me, the high-level concern is this: as software libraries grow, there's a tendency to add additional API as new use cases (and deficiencies in the original design) are discovered. It is very tempting to keep merging small, targeted new APIs to solve pain points that developers are facing in the real world.

This is great for advanced users; learning a series of small changes is easy because it's amortized over months or years. For new users, though, it makes the framework feel like an incoherent soup of complexity. They have a hard time creating a strong mental model and they may abandon the framework altogether.

Angular 1 is a good example of this: I see new users frequently confused by services, factories and providers. They solve similar problems and you have to understand their subtleties to know which one to use. And directives have so many knobs to turn (transclusion, scope, restrict, etc.) that many people consider them to be the domain of only power users.

The way to fix this is to periodically gather up all of the constraints, including the new ones discovered since the original feature was developed, and see if you can find a new Grand Unified Feature that solves all of the constraints with one concept instead of several. Many of us on the core team think that it is probably time for that to happen for model loading in routes.

Rather than adding a parallel prefetch() hook to supplement the serial model(), we would like to investigate if there is a new API that can solve the use cases for both with one concept/hook. Then we can stop telling people to use model() and use a new API that does the right thing when their models are serial and the right thing when they're parallel.

For example, perhaps we replace the model() hook with one called fetchModel(). It works like model() does now, except that fetchModel() hooks are called in parallel across routes. In addition, we add a waitForModel() method that returns a promise for a parent route's model for the cases where you need it to be serial. That way, a serial route might look like this:

// app/routes/post/comments.js
import Ember from 'ember';

export default Ember.Route.extend({
  fetchModel() {
    return this.waitForModel('post').then(post => {
      return $.getJSON(`/posts/${post.get('id')}/comments.json`);
    });
  }
});

This proposal almost surely has flaws that need to be ironed out, but it's an example of what I mean: rather than having a serial model pipeline and a parallel model pipeline, we try to find one pipeline that can do the right thing based on what the developer expresses inside.
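
For contrast, the parallel case under the same sketch would simply omit waitForModel(); fetchModel here is still the hypothetical hook from the example above, not a shipped API:

// app/routes/notifications.js
import Ember from 'ember';

export default Ember.Route.extend({
  fetchModel() {
    // No dependency on a parent model, so under this proposal the request
    // starts in parallel with every other route's fetchModel().
    return $.getJSON('/notifications.json');
  }
});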

I think the best way to move forward is to create a document that enumerates use cases from the real world. Offhand, that includes at least a model that depends on a parent, two models that don't have any dependencies, and a GraphQL example. Once we have a list of the use cases, we can see how the proposed API looks in context. As it is, though, I think the current RFC has way too little detail and consideration of all use cases for something that touches such a fundamental part of the programming model.

@stefanpenner (Member)

The way to fix this is to periodically gather up all of the constraints, including the new ones discovered since the original feature was developed, and see if you can find a new Grand Unified Feature that solves all of the constraints with one concept instead of several.

@tomdale If on every new discovery we have to re-invent the world, we end up with a different but also high-cost learning hazard, as the paradigm shifts (rather than simply evolving). As has happened in prior such re-works, the grass isn't actually any greener on the other side; instead a new fancy paradigm appears, shifting the problem space rather than reducing it.

We should first expand (carefully), before re-inventing the whole model.

I do not believe supporting this in core requires a rework of the entire programming model; in fact, with some more massaging, what is proposed here complements the existing model nicely.

My suggestion is that we massage this RFC, ensure the learnability/understandability addresses @tomdale's concerns as best as possible, then let it ride the release trains and get feedback. A year from now, when we have the time/energy/motivation to improve the routing DSL, we can use the additional knowledge it provides to make better and more informed choices in a redesign (if that is even required).


It seems like the mood is surprisingly negative. If that is the case, maybe we should explore codifying the exact missing events/API hooks required to implement this well in user-space and call it a day. This would be unfortunate.

@nathanhammond (Member)

I'm a bit surprised at the degree to which you have concerns, @tomdale. We have a working implementation of this in production for LinkedIn, a popular addon implementing this RFC, and a pretty compelling demo. In addition, internally, @nickiaconis built a dynamically constructed request multiplexer on top of this simple prefetch primitive which will play nicely with our upcoming GraphQL-like tool we call Deco. All told, this little feature enabled multiple seconds worth of performance improvement for our application, and will go a long way toward dispelling the notion that Ember is slow.

Philosophically I disagree with you, as I'm concerned that a 2 => 3 transition where we move to something like fetchModel will add just as much complexity, with the added cost of breaking backwards compatibility. Or, if we don't break backwards compatibility, it is identically as complex as this prefetch proposal. As it stands, this simply provides an escape valve for parallel behavior, is entirely backwards compatible, and achieves the "stability without stagnation" goal. We've spent months honing this solution internally, and no matter what, to solve this problem, we felt like we needed to expose parallel and sequential primitives to the framework user.

On the topic of "stability without stagnation," a non-blocking model hook has been a feature that people have been asking for since day one. So far the answer has been "just return a plain object and set properties on it later" which is pretty clearly an inferior version of this solution. Not providing a primitive for this behavior (which has been desired since before 1.0!) until the 3.0 timeline feels like a disservice to users of Ember. I truly feel like we should address this now.

Regardless, @tomdale, do you hold the same reservations about adding the underlying primitives necessary to build this in user space? Something like willChangeTransition is the only thing which has to be monkey-patched in, the rest just relies on private (but accessible) API surface area.

@mixonic (Member) commented Feb 8, 2016

FWIW I'd like to see what a hook supporting the addon and use-case would look like. If it really is trivial, it could go straight to a PR and feature flag and likely be discussed/merged from that. Small win, I know, but potentially a win.

@tomdale (Member) commented Feb 8, 2016

@nathanhammond Trying to amp up the sense of urgency is not going to be a winning strategy. The pain is real and I want to fix it. I am happy that you have a proof-of-concept that is working well for you. However, I've been doing this long enough now that I know that rushing in features to solve pain (no matter how acute) ends up being worse in the end, even if I can't yet guess how.

As @mixonic identified, the best path going forward is to figure out the minimum set of hooks we need to expose via public API to let an addon implement this in a forward-compatible way. That will help us eliminate the pain as rapidly as possible while buying us time to have a thoughtful discussion about the programming model.

@tomdale (Member) commented Feb 8, 2016

@stefanpenner

If on every new discovery we have to re-invent the world, we end up with a different but also high-cost learning hazard, as the paradigm shifts (rather than simply evolving). As has happened in prior such re-works, the grass isn't actually any greener on the other side; instead a new fancy paradigm appears, shifting the problem space rather than reducing it.

Maybe this is semantics, but I feel like my proposal is much more of an evolution of the current model than the prefetch proposal. It literally is creating a new hook that is semantically identical to model() except that it gets invoked in parallel instead of serially, and provides a helper to create a promise to wait for another route's models if you need that behavior. I'm not sure how this is "reinventing the world" and prefetch is "simply evolving it." To me, prefetch reinvents the world by providing an alternate universe of promise chains that is going to be hard for developers to reason about.

@tomdale (Member) commented Feb 8, 2016

To be clear, the transition strategy for my proposal is:

  1. Go through your app and rename model() to fetchModel(). In 90% of cases, that is the only change needed.
  2. For routes that depend on other routes' models, refactor just that part to use async model fetching. It's "Just Promises™" so it uses a programming model they're familiar with already.

model() eventually goes away. There is no reasoning about whether you should use model() or prefetch(). There is no manual composing of promise chains required. There's no thinking about how model() and prefetch() may or may not interact. It's a "let is the new var" strategy—it's not reinventing the world, it's making it work the way it should have worked from the beginning with a small adjustment.

One thing I'm unclear on is if my proposal works well with GraphQL, which I think is going to end up being very important. That touches on a whole bunch of stuff like when promises are resolved, how loading routes work, etc. All of those things, by the way, are punted on by the prefetch proposal.

So I agree with @stefanpenner that rushing things in always leads us to realize later that the grass isn't greener on the other side. That applies just as much to prefetch as it does to my proposal, which is why I'm explicitly trying to apply the brakes to this and push back on everyone who wants to jam something in that clearly needs more bake time. Let's expose a hook, make an addon, and let it play out from there.

@stefanpenner (Member)

It literally is creating a new hook that is semantically identical to model() except that it gets invoked in parallel instead of serially, and provides a helper to create a promise to wait for another route's models if you need that behavior. I'm not sure how this is "reinventing the world" and prefetch is "simply evolving it." To me, prefetch reinvents the world by providing an alternate universe of promise chains that is going to be hard for developers to reason about.

I am surprised that you think this is semantically identical to model.

The current flow is as follows

first time we discover that this route will likely be required

  • init()

on each entering of a route

  • activate()
  • beforeModel()
  • model()
  • afterModel()
  • setupController()

finally on exit of the route.

  • deactivate();

To this prefetch adds the notion of "we will likely activate this route, you now have the opportunity to preemptively prepare yourselves" which enables pipelining, or other eager work.


In a perfect world, modelFor is made to return a promise, and all hooks are invoked concurrently. The promise chain the user describes is all that enforces any linearization. Pipelining happens by default, and most people become happy.

Unfortunately, that would be a drastic departure from the current model (as we discussed), so an alternative approach that wasn't as dramatic was requested; this is that approach. I am surprised that now you are saying we should instead take a more dramatic approach and re-work the model.

Should we instead propose that model? Such a model change will obviously cause churn, which is what I thought we were aiming to avoid...

@rwjblue (Member) commented Feb 8, 2016

Should we instead propose that model?

That is what @tomdale proposed I believe.

@stefanpenner (Member)

Should we instead propose that model?
That is what @tomdale proposed I believe.

In a meeting, we actually (when the PR was first introduced) decided it was too large a departure, and something less drastic would be required.

@nickiaconis (Author)

Should we instead propose that model?

That is what @tomdale proposed I believe.

In a meeting, we actually (when the PR was first introduced) decided it was too large a departure, and something less drastic would be required.

I'm at a loss for what is desired considering this already exists and was (rightly, I think) met with opposition: emberjs/ember.js#12415

I feel like this may, in large part, be an issue of naming.

To be clear, the transition strategy for my proposal is:

  1. Go through your app and rename model() to fetchModel(). In 90% of cases, that is the only change needed.
  2. For routes that depend on other routes' models, refactor just that part to use async model fetching. It's "Just Promises™" so it uses a programming model they're familiar with already.

This is the same transition strategy for the proposed prefetch hook. This presentation might give a clearer picture of that, and I apologize if I've overlooked including some pieces of that picture here due to my familiarity with the implementation. Please let me know what information is missing from this RFC, so I may include it.

@stefanpenner (Member)

@nickiaconis ya, unsure. Regardless, it is likely a good idea to get the required router/transition events/hooks in; this will aid your add-on and also aid this proposal if it can be moved forward.

@nathanhammond (Member)

One aside, from random conversation: implicit prefetching or parallelization of any kind could accidentally trigger something super expensive on the backend which you only want to run when absolutely required. (A thought in favor of explicitness, ignoring API.)

@stefanpenner (Member)

@nathanhammond yes. Cancellation may also help (at some point).

Due to the apprehension towards this... I believe the best bet may be to push for the public APIs needed to implement this safely as an add-on. Regardless, those APIs would be a big win, and if this does land in the future it would be aided by those hooks.

@nickiaconis (Author)

@stefanpenner That works for me. Do we need a separate RFC to suggest those public APIs or can we (I?) get started with an implementation?

@nathanhammond (Member)

@nickiaconis can you pull together a PR for willChangeTransition and other public API requirements?

/cc @mixonic

@stefanpenner (Member)

I believe superseded by #126

@cibernox (Contributor)

@stefanpenner I fail to see how #126 covers this functionality.

@nathanhammond (Member)

@cibernox #126 creates the primitive necessary to implement prefetch in user space (as demonstrated by where ember-prefetch needs to monkeypatch Ember internals).

@hoIIer commented Mar 16, 2017

Late to the party: what is the general consensus on using prefetch in a new app (ember-prefetch)? One of my coworkers implemented it, but it broke the app because we have control-flow logic that redirects around our auth wall in our ApplicationRoute.beforeModel. That logic is now severed, as prefetch tries to grab data before beforeModel is ever hit, and so makes the HTTP call without the auth information in the headers from ember-simple-auth (that package includes a mixin that adds the auth info via ApplicationRoute.beforeModel).

So I'm wondering if it's a good idea to use the prefetch package right now or wait until something comes that is part of ember core, which other addons will recognize?

@webark mentioned this pull request Jul 12, 2019