High memory utilization of resource objects #603
I have tried to keep the memory footprint of Annotations low by making it a Lazy - do I understand from your comment that Lazy is still partly responsible for this overhead? I do see I did not do the same for UserData - wouldn't that be the main cause, or can you confirm that Lazy takes a considerable amount of memory too? If so, I guess we could have the private getter on both do the instantiation, but since this would require careful locking (which Lazy handles for me), I am a bit cautious.
Well, Annotations and UserData (which is obsolete anyway) should be as lightweight as possible, so I agree we should do something about it then! Did you have a fix already?
With the requirement of careful locking, no, I don't have a fix yet. Below is roughly what I did for testing purposes.
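(The original snippet did not survive in the thread; based on the fix described later, a hedged reconstruction might look like the following: drop the eagerly allocated `Lazy<T>` and create the backing dictionary only on first access. The field and property names are assumptions, not the actual SDK code, and this version is deliberately not thread-safe, which is exactly why locking was still the open question.)

```csharp
using System;
using System.Collections.Generic;

// Hedged reconstruction, not the actual SDK code.
public abstract class Base
{
    // Before (roughly): a Lazy<Dictionary<...>> field created in every
    // instance, costing a Lazy object plus a delegate per resource.
    private Dictionary<Type, List<object>> _annotations;

    // NOT thread-safe: two threads racing on first access could each
    // create a dictionary, and one set of annotations could be lost.
    // Good enough for a single-threaded memory test, not for a real fix.
    private Dictionary<Type, List<object>> Annotations
        => _annotations ?? (_annotations = new Dictionary<Type, List<object>>());
}
```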
If you used that structure, there is no need for the Lazy, as you're covering things there. Can you share the unit test so we can see what else is in there?
From a rough guess, this ~1 kB per instance is backing around 40 properties.
While doing load testing for the search functionality of our FHIR server, which doesn't support paging, I ran into a problem where a search that returns 30K records took half a minute to complete. Two obvious things I observed are: (1) high memory utilization, which reached over 1 GB once deserialization and serialization allocations are taken into account, and (2) the considerable amount of time the server took for JSON deserialization and serialization of the data. I also wonder why it is significantly faster (about 1 second to complete) on the browser side (Chrome), which I think does JSON processing of the same amount of data as well. Yes, in a real-world scenario we could avoid this kind of problem by putting a limit on the number of results the server will handle, so as not to hold an enormous number of resources in memory.
The unit test is just a simple instantiation of AuditEvent, assigning fixed strings to its properties (not all are used), adding it to a bundle as a search match, and repeating that 30K times. Since the strings are fixed, they are only allocated once in memory.
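For reference, a sketch of that kind of test (an illustrative reconstruction, not the original code; the class and method names are made up, and `OutcomeDesc` is just one of AuditEvent's string properties in STU3):

```csharp
using Hl7.Fhir.Model;

static class SearchLoadTest
{
    // Illustrative reconstruction of the test described above.
    public static Bundle BuildSearchResult(int count)
    {
        const string fixedText = "fixed value"; // fixed string: allocated once, shared by all entries
        var bundle = new Bundle { Type = Bundle.BundleType.Searchset };

        for (var i = 0; i < count; i++)
        {
            // Only a few of the ~40 properties are assigned; the rest stay null,
            // yet every instance still carries its eagerly allocated bookkeeping.
            var audit = new AuditEvent { OutcomeDesc = fixedText };

            bundle.Entry.Add(new Bundle.EntryComponent
            {
                Resource = audit,
                Search = new Bundle.SearchComponent { Mode = Bundle.SearchEntryMode.Match }
            });
        }
        return bundle;
    }
}
```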
Hi, I came across this issue and it seems similar to the one I am experiencing, so I wanted to know its status and whether there is any workaround to prevent the large memory consumption by resources, as well as to clear the allocated memory. I also suspect it could be a problem with the Newtonsoft library, as even converting the JSON to a JObject using the plain JSON parser made memory consumption shoot up. I am using 1.2.1 of the STU3 library.
Re-looking at this one: you will find that more memory is used in the serializer as it streams the content down into a string and then into the objects, so during loading you'll have twice what you think, plus however big the stream buffer is.
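If that doubled footprint during loading is the concern, parsing straight from the response stream avoids at least the one big intermediate string. A minimal sketch, assuming the SDK version in use offers a `Parse<T>(JsonReader)` overload (the Newtonsoft types are real; the exact overload is an assumption):

```csharp
using System.IO;
using Hl7.Fhir.Model;
using Hl7.Fhir.Serialization;
using Newtonsoft.Json;

static class StreamingParse
{
    // Parse directly from the stream so the whole payload is never held
    // as one large string alongside the resulting object graph.
    public static Bundle ParseFromStream(Stream httpBody)
    {
        using (var textReader = new StreamReader(httpBody))
        using (var jsonReader = new JsonTextReader(textReader))
        {
            return new FhirJsonParser().Parse<Bundle>(jsonReader);
        }
    }
}
```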
Yes, memory consumption of the serializers/deserializers can be a problem; we've seen similar things while bulk loading. The only real solution is something @GinoCanessa, @brianpos and I are currently working on: adding generated serializers/parsers that are less flexible than the current parsing framework, but also less memory-intensive. WIP.
By the way, we can easily avoid the allocation of the Lazy() by using the
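(The comment above is cut off in the thread. One thread-safe way to drop the per-instance `Lazy<T>` allocation - an assumption about what may have been meant, not a quote - is `System.Threading.LazyInitializer`, which only pays an interlocked compare-exchange on first access:)

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of one possible Lazy-free approach; names are assumptions.
public abstract class Base
{
    private Dictionary<Type, List<object>> _annotations;

    // Thread-safe lazy creation without a per-instance Lazy<T> object:
    // the dictionary is published at most once; a losing racer's
    // freshly created instance is simply discarded.
    private Dictionary<Type, List<object>> Annotations
        => LazyInitializer.EnsureInitialized(ref _annotations);
}
```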
Referenced commit: Avoid allocation of Lazy Annotations for each resource instance (for R3+) (#603, branch …tion-of-resource-objects)
In the base class "Base", memory is allocated up front for UserData and Annotations, which may be unwanted in some cases. As a temporary fix, changing to allocation on access reduced the amount of allocated memory in our sample case of 30,000 AuditEvent resources from 154 MB to 74 MB.
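To reproduce that kind of before/after comparison, the managed heap can be sampled around the bundle construction (a rough measurement sketch reusing the hypothetical `SearchLoadTest` from earlier in this thread; a profiler or BenchmarkDotNet's MemoryDiagnoser gives more reliable numbers):

```csharp
using System;

static class MemorySample
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);
        var bundle = SearchLoadTest.BuildSearchResult(30_000); // sketch from earlier in the thread
        long after = GC.GetTotalMemory(forceFullCollection: true);
        GC.KeepAlive(bundle); // keep the object graph alive across the second sample
        Console.WriteLine($"Retained: {(after - before) / (1024.0 * 1024.0):F1} MB");
    }
}
```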