Large entities causing memory overhead #2348
Recently I tried to create a `custom-source-plugin` to fetch data from my API, and everything worked great until I adjusted it to recursively fetch data from every page: the array kept growing, memory usage passed 1 GB, and once it was ready to `createNode`, memory continued to increase until the app crashed. So my question is: do we really have to preload everything? If yes, how can I improve performance and efficiency? Otherwise, is there a way to dynamically fetch only the necessary data based on the request?
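For illustration (not from the thread), here is a minimal sketch of a `sourceNodes` implementation that creates nodes page-by-page instead of first accumulating every entity in one array. The endpoint URL, the response shape (`results`, `next`), and the node type name are all assumptions:

```js
// gatsby-node.js of a hypothetical source plugin (Gatsby v2 API).
// The endpoint URL and the response shape ({ results, next }) are
// assumptions for illustration, not the OP's actual API.
const fetch = require("node-fetch")

exports.sourceNodes = async ({ actions, createNodeId, createContentDigest }) => {
  const { createNode } = actions

  let pageUrl = "https://api.example.com/entities?page=1"
  while (pageUrl) {
    const res = await fetch(pageUrl)
    const { results, next } = await res.json()

    // Create nodes as each page arrives instead of accumulating every
    // entity in one array, so only one page is held here at a time.
    results.forEach(entity => {
      createNode({
        ...entity,
        id: createNodeId(`my-entity-${entity.id}`),
        parent: null,
        children: [],
        internal: {
          type: "MyEntity",
          contentDigest: createContentDigest(entity),
        },
      })
    })

    pageUrl = next // assume `next` is null on the last page
  }
}
```

This bounds the fetch-side memory to one page at a time, though Gatsby still holds all created nodes in its in-memory store, which is the limit discussed in the comments below.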
Do you have any timeline in mind for live source fetching, so Gatsby would become a general application generator instead of static-only? Our application is more like an e-commerce site, so I think we might not be able to use Gatsby for this case. But I really love Gatsby's concept; it's really beautiful.
There's probably some low-hanging fruit for increasing efficiency — improving Gatsby's scalability will be a focus in the latter part of this year and into next year. Currently though, it sounds like you're just running into Node's default memory limit; if you run Gatsby with a larger heap (e.g. `node --max-old-space-size=4096 node_modules/.bin/gatsby build`), you can give it more room. Preloading is by far the simplest way to do things, and arguably the best: development and builds are much faster when data is local, and it's easy for Gatsby to autogenerate the GraphQL schema. There are other, harder ways of not making data local, but that hasn't been explored much.
On a related point, is there no standardized way for a plugin (a source plugin, I suppose) to extend the GraphQL schema/resolvers directly, without simply adding preloaded nodes to the tree? In other words, a way to permit custom GraphQL resolve logic for part of the schema, where it would still be executed and cached at build time rather than as some kind of live query?
There is https://www.gatsbyjs.org/docs/node-apis/#setFieldsOnGraphQLNodeType. It's generally suggested you use this only for adding fields that you want to take arguments (e.g. the `excerpt` field on `MarkdownRemark` lets you pass in a `pruneLength` argument). I think the right solution to this problem of "too much data" is a way to pull data fetching and schema creation into another process, with a DB backing the data instead of everything being in memory. Watch this space :-) We're working on a hosted version of this. That way there's essentially no limit to the amount of data Gatsby can handle.
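For context, a minimal sketch of what that API looks like in a site's or plugin's `gatsby-node.js`; the field name `shoutedTitle` and the `times` argument are illustrative, not part of Gatsby's API:

```js
// gatsby-node.js — sketch of setFieldsOnGraphQLNodeType (Gatsby v2).
// Gatsby re-exports graphql-js types from "gatsby/graphql".
const { GraphQLString, GraphQLInt } = require("gatsby/graphql")

exports.setFieldsOnGraphQLNodeType = ({ type }) => {
  // Only extend the node type we care about.
  if (type.name !== "MarkdownRemark") {
    return {}
  }

  return {
    // Hypothetical field taking an argument; the resolver runs at build
    // time when a query asks for the field, not as a live query.
    shoutedTitle: {
      type: GraphQLString,
      args: {
        times: { type: GraphQLInt, defaultValue: 1 },
      },
      resolve: (node, args) => {
        const title = (node.frontmatter && node.frontmatter.title) || ""
        return Array(args.times).fill(title.toUpperCase()).join(" ")
      },
    },
  }
}
```

A query such as `markdownRemark { shoutedTitle(times: 2) }` would then resolve, and be cached, during the build.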
Hey, closing out old issues. Please re-open if you have additional questions, thanks! Also, check out v2: we've vastly reduced memory usage and improved build speed in general.