Ignored error from execute request leads to inserting duplicated objects #80
An in-memory solution could look something like this:
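A minimal sketch of that idea, assuming FEMMapping exposes entityName and primaryKey and that lookupValues is the set of primary-key values being resolved (these names mirror the issue description, not necessarily the library's internals):

```objc
// Fetch every object of the entity in one request (no lookup values in the
// predicate, so the SQLite bind-variable limit never applies) and filter
// against the lookup values in memory.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];

NSError *error = nil;
NSArray *allObjects = [context executeFetchRequest:request error:&error];
if (allObjects == nil) {
    NSLog(@"Fetch failed: %@", error); // at minimum, don't swallow the error silently
}

NSMutableDictionary *existingObjects = [[NSMutableDictionary alloc] init];
for (NSManagedObject *object in allObjects) {
    id primaryKeyValue = [object valueForKey:mapping.primaryKey];
    if ([lookupValues containsObject:primaryKeyValue]) {
        existingObjects[primaryKeyValue] = object;
    }
}
```

Whether this is acceptable depends on table size: reading the primary key fires each object's fault, so memory grows with the number of rows in the table rather than the number of lookup values.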
Or maybe not even filter objects and just add everything to the cache? This may cause memory issues, but as long as the objects are faults it should be fine?
Wow, that's a weird bug. I'd stick to the solution where we split the request into several requests whenever there are more than 999 objects left. Then we can continue processing them as it works now. Thanks for reporting!
Nice, thank you :)
@jcavar Strange. I have implemented tests that should trigger the error you've described, but they pass:

```objc
describe(@"limits", ^{
    __block FEMManagedObjectCache *cache;
    __block NSMutableArray *representation;

    FEMMapping *mapping = [MappingProvider carMappingWithPrimaryKey];
    NSInteger limit = 10000;
    // ...
});
```

By changing that limit variable you can tweak the number of objects. As you will see, no error is produced and all tests run perfectly. Going back to your issue: what environment are you using? watchOS, tvOS?

p.s. in order to run the tests please run
Yes, you are actually right. This must be due to the custom store we are using, then. Sorry for bothering you, I should have checked with a clean project first. I will investigate more and see what is causing this issue for us. Thanks for your help!
Ok, it seems that the issue reproduces with a much bigger number, 500001. This was found here: It is a compile-time constant, and in our use case it is set to a much smaller number for some reason. I will investigate why that is, but it would still make sense to handle this situation in the library.
I agree that we need to handle that. The only thing I'm currently looking for is how and where to grab this value. Maybe the persistent store will disclose it via its options - I'm going to check that later.
On the other hand, this limit appears to be quite a big value. I would suggest subdividing your data into chunks instead of processing it in a single pass. Why so? In the past I analyzed this, and it turned out that saving in batches runs faster than one single big batch. Pseudocode:
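Something along these lines (a sketch only, assuming a representations array of JSON dictionaries; the chunk size and the FEMDeserializer call are illustrative and may differ between library versions):

```objc
// Deserialize and persist the payload in fixed-size chunks instead of one
// huge batch, saving and resetting the context after each chunk.
static const NSUInteger kChunkSize = 500; // tune to your data

for (NSUInteger offset = 0; offset < representations.count; offset += kChunkSize) {
    NSRange range = NSMakeRange(offset, MIN(kChunkSize, representations.count - offset));
    NSArray *chunk = [representations subarrayWithRange:range];

    [FEMDeserializer collectionFromRepresentation:chunk mapping:mapping context:context];

    NSError *error = nil;
    if (![context save:&error]) {
        NSLog(@"Failed to save chunk at offset %lu: %@", (unsigned long)offset, error);
        break;
    }

    [context reset]; // drop materialized objects so memory usage stays flat
}
```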
See the related question on Stack Exchange. On the other hand, FEM should throw a detailed exception explaining why the fetch failed, along with recommendations. What do you think?
Yeah, I think FEM should definitely expose the error if it happens and stop processing. That is actually a very good point: we should divide our data into reasonable chunks and increase the variable count. I guess it is fine to leave that to the caller to figure out, as it has more information about the data and should know the actual SQL limit.
Added a temporary fix in 28b3ec3. To be honest this is a weird solution; however, it turns out that for proper error handling I need to rework a lot of the existing API (Swift also brings additional challenges). Therefore I'm keeping this PR open.
Hi,
In FEMManagedObjectCache there is
- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping
which executes a fetch request.
We ran into an issue when using this with NSSQLiteStoreType. The specific issue we have is when lookupValues contains more than 999 values, which is the maximum variable count for a SQLite database.
See: https://github.com/mackyle/sqlite/blob/cec875a327f8d1924c60ccb20cc96d17c1868d33/src/sqliteLimit.h#L136
In this case an error occurs and an empty array is returned:
Error Domain=NSSQLiteErrorDomain Code=1 "(null)" UserInfo={EncryptedStoreErrorMessage=too many SQL variables}
However, the logic continues as if there were nothing in the database, and inserts objects that may already be in the database.
I think there are two ways to solve this problem: either execute multiple requests with at most 999 lookup values each, or fetch all objects and filter them in memory.
Additionally, the error should at least be logged, but probably also exposed to the caller.
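For reference, a hedged sketch of the first option, shaped like the body of fetchExistingObjectsForMapping: (context and lookupValues, assumed here to be an NSSet, are taken from the description above; 999 mirrors SQLite's default SQLITE_MAX_VARIABLE_NUMBER):

```objc
// Split the primary-key lookup values into chunks of at most 999 so a single
// fetch never exceeds SQLite's bind-variable limit, then merge the results.
static const NSUInteger kSQLiteVariableLimit = 999;

NSArray *allLookupValues = [lookupValues allObjects];
NSMutableDictionary *output = [[NSMutableDictionary alloc] init];

for (NSUInteger offset = 0; offset < allLookupValues.count; offset += kSQLiteVariableLimit) {
    NSRange range = NSMakeRange(offset, MIN(kSQLiteVariableLimit, allLookupValues.count - offset));
    NSArray *chunk = [allLookupValues subarrayWithRange:range];

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];
    request.predicate = [NSPredicate predicateWithFormat:@"%K IN %@", mapping.primaryKey, chunk];

    NSError *error = nil;
    NSArray *objects = [context executeFetchRequest:request error:&error];
    if (objects == nil) {
        // Surface the failure instead of treating "no results" as "nothing exists".
        NSLog(@"Fetch failed for chunk at offset %lu: %@", (unsigned long)offset, error);
        break; // ideally propagate via an NSError ** out-parameter
    }

    for (NSManagedObject *object in objects) {
        output[[object valueForKey:mapping.primaryKey]] = object;
    }
}
```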