
Interop with async? (Tokio/Futures) #2

Closed
jsgf opened this issue Nov 16, 2016 · 75 comments
Labels: enhancement (Improvement of existing features or bugfix)

jsgf commented Nov 16, 2016

Hi -

Have you put any thought into how this could be used with an async IO model, such as with Tokio? Right now it seems like it's all implicitly synchronous, but perhaps it's just a matter of making the various functions take/return Futures?

Thanks

theduke (Member) commented Jul 11, 2017

The big thing here is that this would also enable an implementation of something like dataloader, which Facebook has deemed the best practice for handling remote data fetching for a GraphQL query.

Based on my (limited) experience with GraphQL, it's the only sane way to support large/complex backends and multiple queries per request.

mhallin (Member) commented Jul 12, 2017

I've made a few attempts locally at integrating futures-rs into Juniper, but there are a lot of stumbling blocks that need to be overcome. There are also some nice properties the current synchronous implementation has that we'll probably lose, particularly around memory usage and heap allocations. This might not be a big issue, but I like the "you don't pay for what you don't use" mentality of Rust and C++ in general.

So, these are some issues that have prevented me from building this. There are probably more :)

  • Many AST nodes, particularly identifiers, use references to the original query string to avoid copying. Rust statically guarantees that no AST nodes will outlive the query string, and it's easy to reason about when execute just parses and executes the query synchronously.
  • GraphQLType::resolve* would need to return a Box<Future>, which means that all fields will cause extra heap allocations, even if it's just field like_count() -> i32 { self.like_count }. Now that I think about it, maybe you could change FieldResult to something like enum { Immediate(Value), Err(String), Deferred(Box<Future<Item=Value, Error=String>>) }.
  • The core execution logic would need to be transformed into something that joins multiple futures together and emits a result. No fundamental problem here, it was just a very difficult programming task :)
  • The futures-rs crate kept changing while I was working on this. That shouldn't be the case anymore, but I would be wary of releasing Juniper 1.0 while depending on a pre-1.0 release of futures-rs.
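The Immediate/Err/Deferred idea from the second bullet can be sketched in modern Rust (using std::future rather than the futures-rs 0.1 types this comment predates; the Value type and the two field functions are made up for illustration):

```rust
use std::future::Future;
use std::pin::Pin;

// Made-up resolved-value type, reduced to one variant for the sketch.
#[derive(Debug, PartialEq)]
enum Value {
    Int(i32),
}

// The enum proposed above: synchronous fields return Immediate and pay no
// heap allocation; only genuinely async fields box a future.
#[allow(dead_code)]
enum FieldResult {
    Immediate(Value),
    Err(String),
    Deferred(Pin<Box<dyn Future<Output = Result<Value, String>>>>),
}

// A plain struct-field access stays allocation-free...
fn like_count() -> FieldResult {
    FieldResult::Immediate(Value::Int(42))
}

// ...while a remote fetch defers, boxing exactly one future for the field.
fn remote_follower_count() -> FieldResult {
    FieldResult::Deferred(Box::pin(async { Ok(Value::Int(7)) }))
}
```

The point of the three-variant shape is that the common synchronous path never touches the allocator, which preserves the "you don't pay for what you don't use" property discussed above.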

I've been dabbling with futures-rs and Tokio in another project I'm working on, and the general feeling I get is that it's pretty unergonomic to use. I've stumbled upon cases where I couldn't break a big and_then callback out into a separate function for various reasons: the type was "untypeable" (i.e. it contained closures), BoxFuture requires Send despite its name, Box<Future> didn't work because of lifetime issues, and even when switching to nightly and returning impl Future, there were cases where returned lifetimes were an issue.

Despite all of this, I still think this is a feature we want to have! :) However, there are some constraints on the implementation:

  • Minimal impact on the non-async path. Ideally, Juniper should only create a new heap-allocated future for each object to resolve, not each field. Scalars should not cause any overhead at all.
  • No reliance on unstable compiler features. Juniper should not require a nightly compiler.

This became a long response with little actual content :) I might open up a branch to do some work on this, but it's been hard to work in incremental pieces since it's such a cross-cutting change.

Mange commented Jul 12, 2017 via email

mhallin (Member) commented Jul 12, 2017

I may have expressed myself a bit ambiguously there: if a user defines an async field, then it should obviously allocate a future for that field. I meant that no futures should be allocated for synchronous fields.

@theduke theduke added the enhancement Improvement of existing features or bugfix label Jul 24, 2017
srijs (Contributor) commented Sep 27, 2017

@mhallin @theduke In your opinion, how feasible is it to ship this?

Lack of async support is currently the thing that prevents me from using Juniper for more projects. I appreciate that all in all it would be a huge refactor, but maybe this is something that could be shipped in increments?

I've been gathering a lot of futures/tokio experience lately; would you be motivated to help me get the PRs reviewed and merged if I got started on this? Or do you feel like it's not the right time for this feature?

mhallin (Member) commented Sep 30, 2017

@srijs Just out of curiosity, do you already have a futures-based Rust codebase that you want to expose with Juniper, or are you looking at GraphQL servers with async support in general?

To be honest, the more I work with Promises/futures-rs compared to other concurrency systems such as Erlang's or Go's, the less interested I am in working on this. Just compare integrating futures-rs with, say, Rayon: with Rayon it would be trivial to make the execution logic parallel without running into any of the problems I listed in my first reply here. If the Rust community's async efforts were directed at something with that kind of API, I'd be more interested.

That said, I will of course help you out if you decide to tackle this! I'm not even sure where to begin, but keeping AST nodes alive for the duration of the entire execution without using references with lifetimes is something that needs to be solved first. That might require putting all nodes under Arc, which is kind of unfortunate...

srijs (Contributor) commented Oct 3, 2017

@mhallin I do have existing Rust codebases (that use tokio extensively) where I would love to be able to use Juniper.

It seems that with generator functions/coroutines we're moving in the direction you're describing, but of course that won't be usable in stable Rust for quite a while. So while I agree it would be simpler to leverage coroutines, I don't think it's practical to wait. At least personally, I'd like to see Juniper support async before coroutines land in stable Rust.

I have started to play around with the AST/parser parts, using reference-counted string objects as you suggested (although I'm not 100% sure they would need to be Arc; Rc might be sufficient). Servo uses tendril for this purpose, but it seems rather unstable at the moment. I wonder if there is a crate like bytes just for text, or whether we could actually use bytes for this purpose...
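A minimal sketch of the ownership change being discussed, with a toy whitespace tokenizer standing in for Juniper's real parser: instead of borrowing &'a str spans from the query string, each identifier keeps the whole source alive via Arc<str> and stores its byte range, so nodes become 'static and cheap to clone. All names here are made up for illustration.

```rust
use std::sync::Arc;

// Borrowed form: the AST cannot outlive the query string.
#[allow(dead_code)]
struct BorrowedIdent<'a> {
    name: &'a str,
}

// Owned form: each identifier shares ownership of the source via Arc and
// records the span of its text, so cloning never copies the string data.
#[derive(Clone)]
struct SharedIdent {
    source: Arc<str>,
    span: std::ops::Range<usize>,
}

impl SharedIdent {
    fn as_str(&self) -> &str {
        &self.source[self.span.clone()]
    }
}

// Toy "parser": every whitespace-separated token becomes an identifier.
fn parse_idents(source: Arc<str>) -> Vec<SharedIdent> {
    let mut out = Vec::new();
    let mut pos = 0;
    for tok in source.split_whitespace() {
        let start = source[pos..].find(tok).unwrap() + pos;
        out.push(SharedIdent {
            source: source.clone(),
            span: start..start + tok.len(),
        });
        pos = start + tok.len();
    }
    out
}
```

Swapping Arc for Rc is a one-line change in this shape; Arc is only needed if the execution futures must be Send.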

dcabrejas commented Apr 8, 2018

Hi, can someone give me an update on this? I recently started a project using Juniper, but the data to fulfil the GraphQL request comes from an external web API, so I need to take advantage of futures for performance. Is futures support being worked on, and if so, when will it be ready? If not, how do people do what I am trying to do without using async I/O?

Thanks

thedodd commented Apr 17, 2018

Mostly in response to, or in addition to, @mhallin's comment above, here are some thoughts based on the recent advances on the Rust futures & async front, especially drawing on these sources:

heap allocations

With conservative impl trait and, less importantly, universal impl trait slated for Rust 1.26, we could experiment with field signatures having a return type of impl Future<...> as opposed to Box<Future<...>> to save on heap allocations (we'll see when it actually lands on stable).

Per some comments above about reference issues even inside of a return impl Future<...>, perhaps the upcoming Pin API will be useful.
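The allocation difference being described can be illustrated with a hand-rolled future; the Ready type here is a made-up stand-in for a resolver's future:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in for a resolver future: immediately ready with an i32 field value.
struct Ready(i32);

impl Future for Ready {
    type Output = i32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        Poll::Ready(self.0)
    }
}

// Trait-object return: one heap allocation per resolved field.
fn resolve_boxed() -> Pin<Box<dyn Future<Output = i32>>> {
    Box::pin(Ready(1))
}

// `impl Trait` return: the concrete future is returned by value, so the
// future itself costs no allocation at all.
fn resolve_inline() -> impl Future<Output = i32> {
    Ready(2)
}
```

The returned impl Future value is exactly the size of the underlying struct (here, one i32), while the boxed version always hands back a fat pointer plus a heap cell.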

AST sharing

With futures-rs 0.3, the Pin API will be leveraged for futures, which will allow reference sharing between futures &c. I haven't looked at the code in Juniper where the AST sharing takes place, so the Pin API may or may not make a difference. I need to investigate a bit more.

futures stability

futures-rs has just recently gone through a pretty massive revamp with the 0.2 release. Apparently the design is more long term, with a 0.3 coming soon:

we anticipate a 0.3 release relatively soon. That release will set a stable foundation for futures-core, after which we can focus on iterating the rest of the stack to take full advantage of async/await!

So, once 0.3 lands, that may be a good time to start experimenting more aggressively.

Thoughts?

dvic (Contributor) commented Jul 24, 2018

FYI, futures 0.3.0-alpha.1 has been released.

thedodd commented Aug 4, 2018

Yes! It would be so awesome to have async GraphQL in Rust! I’ve built GraphQL servers in Node, Python, Go & Rust; and Rust’s Juniper is by far the most powerful and ergonomic to work with. Adding async support will be the crown jewel, IMHO 🙌

I think now would be a great time for some futures integration, for sure.

divoxx commented Oct 15, 2018

By the way, I'm not sure if this has been proposed/discussed yet, but other libraries have implemented support for dataloader by using thunks and breadth-first resolution, without needing to introduce async code into the GraphQL executor.

The golang graphql library PR explains the approach and contains other references as well: graphql-go/graphql#388

Would this be worth pursuing? Should this be added as a separate issue?
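A rough sketch of the thunk/batching idea: resolvers don't fetch immediately, they record the key they need, and after one breadth-first pass over a level of the tree all recorded keys are fetched in a single batched call. The Loader type and u32 ids are made up for illustration; a real dataloader would also cache and hand the values back.

```rust
use std::collections::HashSet;

// Collects keys requested during one breadth-first resolution pass.
struct Loader {
    pending: HashSet<u32>,
}

impl Loader {
    // Called by a resolver instead of fetching right away (a "thunk").
    fn load(&mut self, id: u32) {
        self.pending.insert(id);
    }

    // Called once per tree level: in a real dataloader this would issue a
    // single batched query, e.g. `SELECT ... WHERE id IN (...)`.
    fn dispatch(&mut self) -> Vec<u32> {
        let mut ids: Vec<u32> = self.pending.drain().collect();
        ids.sort();
        ids
    }
}
```

The deduplication via HashSet is what turns N per-field fetches at one depth into one IN-query, which is exactly the behavior the golang PR linked above describes.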

liuchong commented

Will async Juniper be coming soon? 😹

tomhoule (Member) commented

I looked into how this could be implemented a bit. My understanding is that we would need to return futures instead of results in a few places if we want to keep the changes minimal, for example resolve_field. This is a good example because it is a trait method, so we currently would have to box the returned future, which means one allocation per field. As far as I understand, returning unboxed futures from trait methods is blocked on GAT and existential types/async trait methods.

With async/await and async methods in traits, it looks like it would be possible to generate code for async resolution without boxing a future per field, but it blocks this feature until at least a few months after async_await lands (existential types are not working at the moment, even on nightly, as far as I know).

theduke (Member) commented Dec 30, 2018

True, we will also need to figure out how to deal with the query AST. We will need to either put it behind an Arc or clone the sub-AST when necessary (or do a hybrid approach for fan-outs).

The boxing issue may not be that bad if we can come up with an API that's transparently upgradable to a non-boxing solution, which should be possible.

Also, considering that async resolvers will only be necessary for fields that do DB or API requests, the boxing should not be significant, since the small allocation will be dwarfed by the actual workload.

The bigger challenges may be:

  • possibly restricting the concurrency for certain fields
  • easily supporting batching with something similar to https://github.com/facebook/dataloader
  • tuning the tokio runtime to handle the workload correctly (e.g. always using blocking for non-async resolvers, or having a dedicated thread pool for the sync resolvers, etc.)
  • preventing users from blocking the runtime, by making the docs very clear on when to use async and when not to (and when users might manually use the blocking primitives)

We should also be aiming at basing this on futures 0.3 instead of 0.1.

thedodd commented Mar 11, 2019

@theduke yea, I would imagine that cloning an Arc would be more performant than cloning the query AST or subsections thereof (could be wrong on that though).

Agreed on the boxing issue.

Do you mind expounding a bit on "possibly restricting the concurrency for certain fields"?

As far as tokio runtime tuning, using blocking and such, I do have a few thoughts for discussion:

  • I don't know if we should do anything other than strongly encourage folks to understand that if they are not returning an async value (a future), then they must be very careful not to block the runtime.
  • For folks who understand this, the overhead of delegating to a background thread via the blocking API may be overkill and will add some performance cost (as minimal as it may be).
  • For folks who do not understand this, they can still easily block the runtime even in async handlers, by using thread-blocking locks or making synchronous network calls. So arguably the only real solution is to teach and help folks understand.

So, perhaps a nice balance may be to teach non-blocking in the docs fairly well, include examples of how to use the blocking API for situations where such a thing is needed (eg, dealing with legacy crates), and then just encourage folks not to do any blocking operations when resolving scalar values &c.

Also, +1 on futures 0.3. The compatibility layer is pretty solid as well, so that shouldn't cause any issues.

theduke (Member) commented Mar 21, 2019

I was hoping to hold off on all this until the whole async ecosystem stabilizes and we see a new tokio release with std Future and async/await support, but sadly things are taking a long time.

jkoudys commented Apr 24, 2019

Futures just stabilized in 1.36 - rust-lang/rust#59739 (as 0.3)

Are we confident we now have an API to start building out juniper resolvers as Futures?

thedodd commented Apr 24, 2019

@jkoudys & @theduke it looks like, with the 1.36 stabilization and the fact that tokio has interop and a strong compatibility layer between futures 0.1 and std futures, now might be a great time.

For async/await, we will have to be on nightly ... but we could always ship a preview branch which requires nightly, and just not merge it until everything is on stable.

Something of note on async/await: it seems the async patterns have pretty well settled down, but the await syntax is still up in the air. We could just use the nightly await! macro for now, and once the final await syntax lands, we can switch over.

theduke (Member) commented Apr 25, 2019

I don't think await will be all that useful for the code in Juniper itself, since we will mostly be dealing with joins and a few manually written futures, so I think it's fine to go ahead now.

In addition to what I posted above, the big questions we need to answer are:

the FieldResult type

We certainly want to allow a mix of async and non-async resolvers. E.g. it makes no sense to resolve fields that are just a struct field access with a future. In that light, the FieldResult type will have to become an enum that can contain both a sync response and an async one. Both the auto-generated macro code and the other plumbing will have to resolve sync resolvers directly and async ones via a future, and then wrap everything in a future when appropriate.

Runtime

There is also the question of how and when a thread pool will need to come into play. Preferably we would remain runtime-agnostic and not rely on anything provided by tokio et al. I'm not sure that's possible, but it probably should be. That would also mean no thread pool, and letting the user decide which things are expensive and need to be delegated to one.

With futures we can now add timeouts to resolvers in a sensible way, but that might require a specific runtime.
If we need a runtime, we should wait until tokio is refactored to 0.3.

Concurrency limits

Currently everything happens one after the other, synchronously.
With futures we could have a lot of work items fired off simultaneously.

Just imagine an API request that fires off 10 database requests and 5 network fetches.
Currently, they happen one after the other in a thread.

With futures we might have 30 requests arriving at once, all started at the same time, leading to 300 db and 150 network requests. And new requests arriving all the time. I guess this could overwhelm the runtime pretty quickly and cause a ballooning of workload because every single request takes longer.

So I think some limiting system will be essential. E.g. no more than X actively processed requests at a time; additional ones get put in a queue and are only resumed once another request finishes.

Also timeouts come into play here.

I think this will be trial and error. Let's whip something up and see how it works out with benchmarking. But it is something to consider.

I'm pretty certain the limit/queuing will need to happen.
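The limit/queue policy described above can be sketched as a std-only counting semaphore; in an async setting this role would typically be played by something like tokio's Semaphore, but the policy is the same. All names here are made up for illustration.

```rust
use std::sync::{Arc, Condvar, Mutex};

// At most `max` requests execute at once; the rest wait in a queue and are
// woken one at a time as running requests finish.
struct Limiter {
    max: usize,
    active: Mutex<usize>,
    cv: Condvar,
}

impl Limiter {
    fn new(max: usize) -> Arc<Self> {
        Arc::new(Limiter { max, active: Mutex::new(0), cv: Condvar::new() })
    }

    // Blocks until a slot is free, then claims it.
    fn acquire(&self) {
        let mut active = self.active.lock().unwrap();
        while *active >= self.max {
            active = self.cv.wait(active).unwrap();
        }
        *active += 1;
    }

    // Releases a slot and wakes one queued waiter.
    fn release(&self) {
        *self.active.lock().unwrap() -= 1;
        self.cv.notify_one();
    }
}
```

Wrapping each request's execution in acquire/release caps the fan-out: with max = 30, the 300-database-query scenario above can never have more than 30 requests expanding at once.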

Introspection / tracing

I don't know much about the tracing functionality of tokio (https://github.com/tokio-rs/tokio/tree/master/tokio-trace), but this might be a very nice thing to have to provide insight into your workload and point out slow / problematic resolvers. Investigating this would be nice.

Let's take it step by step.

Is anyone interested in working on this?

tomhoule (Member) commented

The thread pool part could be hard to abstract, so in my opinion it would make sense to leave setting up the runtime and handling blocking tasks to users of the library. For example, the tokio thread pool handles blocking tasks differently from most others (the blocking function).

thedodd commented Apr 25, 2019

But that would also mean no threadpool and letting the user decide what things are expensive and need to be delegated to a threadpool.

@theduke & @tomhoule yea, I definitely agree. As long as their blocking/background threadpool operation returns a future, then we should be good. We shouldn't have to do anything else to handle it.

As far as Concurrency Limits:

  • on one hand, I'm tempted to say that we should allow users to horizontally scale out their instances as they need to on their own. Allow them to load balance traffic across their instances, and then we just operate on a best-effort basis. If we impose limitations on our side, then we might actually inhibit scalability for folks who have other ways of managing this sort of thing (like using Kubernetes & HPA).
  • on the other hand, perhaps we could introduce some sort of complexity limitation, something that is opt-in. A few GraphQL frameworks in other languages have something like this. We could offer depth complexity limits and maybe parallel-resolution limits. However, the tokio runtime can handle a pretty massive number of concurrent futures/tasks, so an imposed limitation might be an over-optimization that undersells the capabilities of the underlying system. Maybe something optional would be best.
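The opt-in depth limit mentioned in the second bullet could look roughly like this; Selection is a made-up stand-in for a parsed query tree, not Juniper's actual AST type:

```rust
// Minimal stand-in for a parsed selection set: a field name plus children.
enum Selection {
    Field(&'static str, Vec<Selection>),
}

// Depth of a selection tree: the field itself plus its deepest child.
fn depth(sel: &Selection) -> usize {
    let Selection::Field(_, children) = sel;
    1 + children.iter().map(depth).max().unwrap_or(0)
}

// Opt-in validation step run before execution: reject overly nested queries.
fn check_depth(root: &Selection, max: usize) -> Result<(), String> {
    let d = depth(root);
    if d > max {
        Err(format!("query depth {} exceeds limit {}", d, max))
    } else {
        Ok(())
    }
}
```

Because the check runs on the parsed query before any resolver fires, it costs nothing on the execution path and can simply be skipped when the limit is not configured.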

Thoughts?

dsilvasc commented

For concurrency limits, does Juniper already batch requests within a query? If not, it might be useful to see how the Sangria GraphQL server does it, with what they call deferred values, deferred resolvers, and fetchers:

https://sangria-graphql.org/learn/#deferred-value-resolution
https://sangria-graphql.org/learn/#high-level-fetch-api

bsmedberg-xometry commented

Is there anything I can do to help with this project? I have a personal and professional interest in moving this forward, and potentially some time to spend on it.

LegNeato (Member) commented

Hello @bsmedberg-xometry , long time no see! Would love help as we have been a bit swamped lately. This work is happening on the async-await branch.

The ideal short-term help would be to implement union and interface support there. That entails porting logic over from the macros into proc_macros. It's a bit hairy...not because the work is hard but because macros are annoying to work with.

There is a WIP PR for async subscription support, but it doesn't touch the union and interface stuff, so you shouldn't really conflict.

dfrankland commented

Is there a need for more examples? I was able to successfully use the async-await branch with tide: https://github.com/dfrankland/tokei-aas

I'd be happy to add an example using that as the basis.

theduke (Member) commented Oct 28, 2019

Small update: I refactored the graphql_union! macro to a juniper::union proc macro.
graphql_interface! will follow soon.

No async support yet, that's the next step, but a fairly small one.

davidpdrsn (Contributor) commented

@theduke Will graphql_union! be deprecated? I'm currently using it in juniper-from-schema.

theduke (Member) commented Oct 29, 2019

Yes, I'm replacing all the macros with proc macros.
The proc macros have equivalent functionality, but a slightly different syntax.

@theduke theduke unpinned this issue Oct 31, 2019
@theduke theduke pinned this issue Oct 31, 2019
theduke (Member) commented Nov 7, 2019

Just an additional update: I've been traveling a lot for the past few months, but starting this Saturday, I will have 2 weeks downtime.

I will push hard to have an alpha release with async/await out by next Friday.

theduke (Member) commented Nov 13, 2019

Is there a need for more examples? I was able to successfully use the async-await branch with tide: https://github.com/dfrankland/tokei-aas

I'd be happy to add an example using that as the basis.

@dfrankland what would be extremely helpful is building out a full-featured benchmark with a complex schema in https://github.com/graphql-rust/juniper/tree/async-await/juniper_benchmarks (this is currently very rudimentary).

LegNeato (Member) commented

@theduke Seeing as we are telling folks to base their PRs on the async-await branch, let's just get CI passing and merge it into master so we can jam on it. I don't think we'll do another sync-only release, and if we need to, we can re-branch at the pre-async point.

theduke (Member) commented Nov 15, 2019

👍 , that's what I wanted to suggest.

LegNeato (Member) commented Jan 21, 2020

This has landed on master! 🎉

There is still some cleanup work to do (and the book tests don't pass) but all other tests seem to be working with both the async feature and without it.

rivertam commented

Should this issue not be closed then?

kiljacken commented

I believe closing it wouldn't make sense until async support is in a release on crates.io.

LegNeato (Member) commented Mar 6, 2020

Interfaces still don't work and we need to rip out the sync code. Keeping this open until those are done and a release is made.

repomaa commented Jul 11, 2020

Hey! Thanks for the awesome work! I managed to integrate master with dataloader-rs! I noticed that the batch loading isn't working on deeper levels of the tree though. The following example will make it clear:

query {
  recipes {
    name
    ingredients {
      ingredient { name }
      amount
    }
  }
}

This will result in the following db queries (all of which are done by dataloader batch loading functions).

SELECT name FROM recipes
SELECT ingredient_id, amount FROM recipe_ingredients WHERE recipe_id IN ($1, $2, $3)
parameters: $1 = '3', $2 = '1', $3 = '2'
SELECT name FROM ingredients WHERE id IN ($1)
parameters: $1 = '1'
SELECT name FROM ingredients WHERE id IN ($1)
parameters: $1 = '2'
SELECT name FROM ingredients WHERE id IN ($1)
parameters: $1 = '4'

So the second level of resolvers (recipe -> recipe_ingredients) is batched, but the third (recipe_ingredient -> ingredient) isn't. This could just as well be a bug in dataloader-rs, or even more likely just my incompetence, but I thought I'd post it here in case someone has come across this and solved it already.

EDIT:

OK, it seems that if I set data_loader.with_yield_count(very_high_number) it will successfully batch the third level as well. But this results in very long-running requests (a second or so).

LegNeato (Member) commented Jul 11, 2020

Do you have a link to code?

repomaa commented Jul 11, 2020

@LegNeato sure: https://p.jokke.space/5l2FU/

tyranron (Member) commented Oct 6, 2020

With #682 landed, we're fully async compatible now!

wongjiahau commented Oct 9, 2020

@tyranron Are there any examples?

tyranron (Member) commented Oct 9, 2020

@wongjiahau check the book on master and examples in the repository.

wongjiahau commented

I found the fix: using juniper = { git = "https://github.com/graphql-rust/juniper", rev = "68210f5" } instead of juniper = "0.14.2".

LegNeato (Member) commented

crates.io has been updated with Juniper's async support, sorry for the delay. Any future bugs or API changes can get their own issues.

Note that we still support synchronous execution via juniper::execute_sync.

Thank you to all the contributors who made this possible, especially @nWacky, @tyranron , @theduke , and @davidpdrsn 🍻 🎉 🥇

@tyranron tyranron unpinned this issue Dec 10, 2020