500 Number of resources limit and 1000000 Template size limit #2550
Comments
I think I fixed this. It can be worked around by adding the following code to
I'm still running some tests; I'll report back with any news. |
Hey @MarlonJD, Thank you for raising this issue and sharing the alternative solution. We will incorporate the workaround instructions into the troubleshooting section of the documentation. Additionally, we are marking this as a bug for further evaluation by the team. |
Hello @AnilMaktala. Thanks for your reply. I'm glad to hear that, and glad there's a solution, even if it's only temporary for now. |
Hello @AnilMaktala Neither the old nor the new version works when creating from scratch. I created split NestedStacks, but I cannot move the old resources into the new nested stacks. This is a really important issue. If anyone can help with moving resources between CDK and CloudFormation stacks, it should work. It only works if 500 or fewer new resources are involved in an update, so I cannot do a first-time deploy; I can only deploy in a 3-4 part scheme, which is an awful workaround. I cannot migrate to Gen 2 because of this. It's a huge blocker for a big project. I hope it will be fixed soon. |
I am about to hit this limit. Is there any workaround for this for now?
|
You can fix this error with custom stack mapping, but it's not merged into Gen 2 yet; you have to modify the source and push manually for now. I hope it can be fixed soon. |
@MarlonJD would you be able to kindly share your work-around for this? So you are saying that using this work-around makes it so that I can no longer "push" in 1 go and have to split it up into multiple deployments? |
Hey @LukaASoban, which one are you using, Amplify Gen 1 or Gen 2? |
Gen 2 |
Hey @LukaASoban, you have to build manually and push locally when you do this workaround, because custom stack mapping isn't merged yet. However, @AnilMaktala has already created a test version of these changes, so you can try that first; if it works, you can use auto build. You need to use a specific version of the backend, or manually edit your backend to use these versions:
Then go to your project and find this code:
and add:
About the resolver names, in short: if the Todo model has a child, the resolver will be Todo
So just find the hasMany fields on your models and add them until you're under the resource limit. I haven't tried this method with auto build, because I'm already using custom functions and it's not merged yet; but if you pin these versions it might run on auto build. If it doesn't work, I'll try to help you with editing locally. |
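The bookkeeping behind this workaround can be sketched as a small helper: take the list of generated resolver logical names and assign them in chunks to numbered custom stacks, so no single nested stack holds too many resources. This is a sketch of the idea only, not Amplify's actual API, and the resolver names below are made up for illustration.

```typescript
// Hypothetical helper: distribute resolver logical names across numbered
// custom stacks so no single stack exceeds a chosen resolver budget.
function buildStackMapping(
  resolvers: string[],
  maxPerStack: number,
  stackPrefix: string
): Record<string, string> {
  const mapping: Record<string, string> = {};
  resolvers.forEach((name, i) => {
    // Integer division: the first maxPerStack resolvers land in stack 1,
    // the next batch in stack 2, and so on.
    const stackIndex = Math.floor(i / maxPerStack) + 1;
    mapping[name] = `${stackPrefix}${stackIndex}`;
  });
  return mapping;
}

const mapping = buildStackMapping(
  ["Todo.comments", "Comment.todo", "Todo.owner", "Owner.todos"],
  2,
  "CustomResolverStack"
);
console.log(mapping);
// { "Todo.comments": "CustomResolverStack1", "Comment.todo": "CustomResolverStack1",
//   "Todo.owner": "CustomResolverStack2", "Owner.todos": "CustomResolverStack2" }
```

The point is only that the mapping is mechanical: once you know which resolvers exist, spreading them across stacks is a simple chunking problem.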
Thanks @MarlonJD I will try this later. My only worry is that if and when the Amplify team fix it, will it cause me issues since they might not move forward with @AnilMaktala 's approach? |
@LukaASoban I didn't fully understand what you mean, but you can use this approach for now if you need it immediately. If the team releases a new fix in a stable release, I think we can migrate to that solution; I don't think it will matter, because we are just moving some stacks into other sub-stacks. |
I am not an expert with the CDK, but I guess I was worrying about whether the LogicalIDs would be modified by moving to sub-stacks |
@MarlonJD I'm facing the same issue with Amplify Gen 2: Template may not exceed 1000000 bytes in size. To give you some context (as of September 16, 2024): I would really appreciate it if you could help me solve this issue with Amplify Gen 2. Thanks in advance. |
Hey @vinothj-aa, it's been 3 months and there is no change in this issue. |
Do you mind sharing the steps? It would be very useful in our case of custom types. |
You can directly use this method #2550 (comment). I recommend editing your local amplify with this PR aws-amplify/amplify-backend#1593, then adding your resolvers to custom stack mappings. Did you use stack mappings on Gen 1? It's still the same; you can check here: https://docs.amplify.aws/gen1/javascript/build-a-backend/graphqlapi/modify-amplify-generated-resources/#place-appsync-resolvers-in-custom-named-stacks If you have any issues while trying this, I'll try to help again |
Sure, I'll check these out. Thank you for your response. |
@MarlonJD Steps that I followed:
Updated the amplify/data/resource.ts file with the following:
I did not face any issues or errors but the stack remains the same. Note: |
Hey @vinothj-aa, did you try a manual deploy? Are you already under the 500 limit now? What's the output, can you share? Stack mapping may need more entries; I have nearly 70 models, and 50+ custom stack mappings for them. BTW, you cannot make big changes in Gen 2; @AnilMaktala and the Amplify team have been trying to fix this issue for a long time. You can build manually with this command:
|
@AnilMaktala I just created a new PR, same as yours but without the experimental flag. I hope we can get this parameter soon; we cannot use auto build without it. If there are any tests I should run, just tell me and I'll try. |
Did you mean that I have to deploy manually on the Amplify console using the above command? Here's the count: Backend Deployed Resources: 380. I'll be creating plenty more custom types/models, as we have just started development.
Please find the attached amplify_outputs.json file |
@vinothj-aa You can use auto build if you're already using the npm test version. What's the output of the build? |
The build was successful but still no split stacks. |
@vinothj-aa Did you check in CloudFormation? It just splits the CloudFormation stack. This only solves the total-number-of-resources issue: if you aren't already over the 500-resource limit or the size limit, splitting your CloudFormation stacks lets you update your backend. So, can you update your backend without an issue now? |
I still get this warning: As you can see, the limit is almost reached and I'm unable to add more custom types, queries or mutations, as it still leads to the original issue: Template may not exceed 1000000 bytes in size. Are you suggesting adding more resolvers to this section?
If yes, the issue is that I'm unable to add more custom types because the template size goes beyond 1000000 bytes, so I'm not sure how adding more custom types, queries and mutations by splitting the stack would work. Right now, I'm adding an existing DynamoDB table like this:
As with the above code, I'm unable to add existing DynamoDB tables (which I need for transactions, batch operations, etc.) due to the template size error. |
@vinothj-aa Relations count heavily as resolvers on models. I'm not sure which type of resolver is increasing your resources, but you can check. You need to add the new resolvers to stackMapping; you can find the resolvers in this file. Search |
I got the resolver names from manifest.json file and included a few more in resource.ts file (to test if the mapping works).
When I try to deploy the changes, I get this warning:
Finally, the deployment fails with the same error:
My observation: Model based resolver (auto-generated by AppSync) - You can notice the mapped stack name (SplitCustomQPAdminStack) in the key
Whereas for a custom resolver, the mapped stack name is not part of the key
Does this stack mapping help in reducing the template size? The template size keeps increasing when I add more resolvers to the experimentalStackMapping object. |
@vinothj-aa Can you try putting the first 5 in SplitCustomQPAdminStack1 and the rest in SplitCustomQPAdminStack2? Maybe that works. Sometimes these mappings don't change the resource count or template size; oddly, it can even increase. Maybe splitting into 2-3 parts works. If it lets you update your backend, you can ignore it for now; if you're making big changes to the backend, split them into 2 or 3 parts. Stack mapping just sidesteps the error; we haven't found a real solution for a long time. Even when you split into parts, it can give nested-stack issues, so make small changes, update, then rest, and repeat. E.g. I have 70 models and built this backend in 9 parts; if you make major changes in one go, it errors, so please try it like this |
@MarlonJD
The remaining resolvers belong to custom types and those are not modified/affected by this approach. |
@AnilMaktala was working on a solution. I think this splitting should be done automatically during updates, because there is also a nested-stack limit and that isn't handled either. I hope it will be solved in the near future; for now, if you can still update your backend, just keep your changes small. I don't know what else we can do |
You are correct. I'm making small, incremental changes and I was cruising well before hitting the template size limit issue. Now, everything has come to a standstill. I'm exploring other options and I'll post updates if I'm able to fix this issue. |
I'm running into the same problem: [Warning at /amplify-xxxx/data] Template size is approaching limit: 925984/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size. [ack: @aws-cdk/core:Stack.templateSize] |
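The warning above also mentions suppressTemplateIndentation. The reason it helps is mundane: CloudFormation counts raw template bytes, and pretty-printed JSON carries a lot of whitespace. A minimal stand-alone sketch (the toy template object here is illustrative, not a real CDK output):

```typescript
// Toy CloudFormation-like template with 50 resources, to compare the byte
// cost of pretty-printed vs. minified JSON serialization.
const template = {
  Resources: Object.fromEntries(
    Array.from({ length: 50 }, (_, i) => [
      `Fn${i}`,
      { Type: "AWS::AppSync::Resolver", Properties: { FieldName: `field${i}` } },
    ])
  ),
};

const pretty = JSON.stringify(template, null, 2); // indented output
const minified = JSON.stringify(template);        // indentation suppressed

// The content is identical; only the whitespace differs, yet the
// indented form is measurably larger.
console.log(pretty.length, minified.length);
```

Suppressing indentation buys headroom against the 1000000-byte limit without changing any resources, which is why the CDK warning suggests it alongside splitting stacks.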
Hey there @LukaASoban @vinothj-aa @thomasoehri, it seems we can now disable some auto-generated resources, which may help decrease our resource count and finally let us push without an error. Has anybody tried this? #2559 (comment) |
Hi @MarlonJD, for me, disabling all the auto-generated queries/mutations/subscriptions that I have custom replacements for reduced my template size from 925984/1000000 to 842567/1000000. |
Hey @thomasoehri, what about the resource count, did that also decrease? |
This is good news for people using models; however, in our case we have just 1 model and the rest are custom types. As we add the data sources for custom types to Amplify data, the template size exceeds 1000000 bytes and our deployments fail. I was able to come up with a workaround to unblock ourselves, and I'll be happy to share my approach if anyone else is facing this issue with custom types. |
Hey @vinothj-aa, happy to hear that you fixed your issue. Can you share with us how you did it? |
Sure! Here are the steps:
Please let me know if you need more details. |
A small workaround to remove some pressure is to remove some of the unused resolvers. For example, if you are not using the subscriptions on the Todo:

```ts
a
  .model({
    title: a.string().required(),
  })
  // available options: 'queries', 'mutations', 'subscriptions', 'list', 'get', 'create', 'update', 'delete', 'onCreate', 'onUpdate', 'onDelete'
  .disableOperations(['subscriptions']),
```
|
I tried this, but it is only decreasing the resource size, not the resource count. We still need, |
Hello all, I have now reached this limit and am not able to add anything else (but I need to).
@dpilch I have already disabled all the operations I could using |
How did you install the Amplify CLI?
npm
If applicable, what version of Node.js are you using?
v18.20.2
Amplify CLI Version
1.0.1
What operating system are you using?
MacOS
Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.
No
Describe the bug
I have 60 models. I'm trying to migrate to Gen 2.
npx ampx sandbox
command tried to push, then got this warning: Then I tried to test reaching the limit and got an error like this:
So, I was able to fix this limit issue by creating custom stacks for some resolvers as in the docs: Place AppSync Resolvers in Custom-named Stacks.
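In Gen 1 that workaround is configured through the API's transform.conf.json. A minimal sketch, assuming a Todo model (the resolver logical IDs and stack name here are illustrative; the real IDs come from your build output, as described in the linked docs):

```json
{
  "Version": 5,
  "StackMapping": {
    "GetTodoResolver": "CustomStack1",
    "ListTodoResolver": "CustomStack1"
  }
}
```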
And I could fix the template size limit issue by running
amplify push --minify
I couldn't do these in Gen 2. So how can we fix this?
Just info:
I saw this example of splitting stacks with the CDK; this article may help. I saw that people could solve this issue when using
aws-cdk
and serverless (framework)
by using split-stacks.
Expected behavior
Should update stack
Reproduction steps
Creating a big schema will give this error. I added some test models like this. If models have many relations, the resource count starts to increase quickly.
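To see why relations inflate the count, here is a back-of-envelope estimate. The per-model and per-relation costs are assumptions for illustration (each model brings a table, CRUD resolvers, roles, and so on; each relation adds its own field resolver and functions), not official numbers:

```typescript
// Rough estimator with ASSUMED per-model and per-relation resource costs;
// the real costs vary by auth rules, indexes, and transformer version.
function estimateResources(
  models: number,
  relations: number,
  perModel = 8,
  perRelation = 2
): number {
  return models * perModel + relations * perRelation;
}

// With 60 models and 40 relations, even these conservative assumed costs
// already exceed the 500-resource limit of a single CloudFormation stack.
console.log(estimateResources(60, 40)); // 560
```

Under these assumptions a 60-model schema is over the limit before any custom types or functions are added, which matches the report above.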
Project Identifier
No response
Log output
Additional information
No response