Fall into an infinite loop when unpacking malformed data. #149
Comments
@nori0428, thank you for reporting the issue. I reproduced the problem.

    while(sz < size) {
        sz *= 2;
    }

I'm not the original author of this code, but I can guess its intention. The algorithm calculates the minimum size that is a multiple of the page size and that covers 'size'. The goal seems to be efficiency. However, if no such minimum can be found (because the doubling overflows), I think the requested size itself is an appropriate fallback. If a multiple of the page size were mandatory, returning NULL as in your patch would be the right fix, but in this case a multiple of the page size is not mandatory. The loop should try to find that minimum size and, if the doubling overflows, fall back to the requested size:

    while(sz < size) {
        size_t next_size = sz * 2;
        if (next_size < sz) {
            sz = size;
            break;
        }
        sz = next_size;
    }

By the way, I found similar code at the following locations:

They should also be fixed.
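For reference, here is a minimal standalone sketch of the corrected growth calculation as a self-contained helper. The function name `next_buffer_size` and its signature are illustrative only and not part of msgpack-c; the real code inlines this logic at each allocation site:

```c
#include <stddef.h>

/* Sketch: grow from an initial chunk size by doubling until the request
   is covered; if the doubling wraps past SIZE_MAX, fall back to the
   requested size instead of looping forever. */
static size_t next_buffer_size(size_t initial, size_t size)
{
    size_t sz = initial;  /* e.g. the page/chunk size */
    while (sz < size) {
        size_t next_size = sz * 2;
        if (next_size < sz) {
            /* doubling overflowed: return the requested size as-is */
            sz = size;
            break;
        }
        sz = next_size;
    }
    return sz;
}
```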
@redboltz, thank you for your quick response. Certainly, the similar code you pointed out causes the same problem. By the way, I hope that msgpack::unpacker and the related classes will get a method to set a size limit. Currently, msgpack::zone, sbuffer, etc. try to allocate as much memory as the input demands. If a size limitation API (or define) existed, we could handle this properly. Regards.
I completely agree with you. A size limitation API is an important feature for protecting a server. I will consider what kind of API would be best.
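For illustration only, such a limit could be checked at the growth calculation before any allocation takes place. The macro and function below are hypothetical and are not part of the msgpack-c API; they only sketch the kind of guard being discussed:

```c
#include <stddef.h>

/* Hypothetical compile-time limit; the value 64 MiB is an example only. */
#ifndef MSGPACK_HYPOTHETICAL_SIZE_LIMIT
#define MSGPACK_HYPOTHETICAL_SIZE_LIMIT (64u * 1024u * 1024u)
#endif

/* Hypothetical check: returns 0 if the request is acceptable,
   -1 if it exceeds the configured limit and the unpack should fail. */
static int check_alloc_size(size_t requested)
{
    if (requested > MSGPACK_HYPOTHETICAL_SIZE_LIMIT) {
        return -1;
    }
    return 0;
}
```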
When receiving malformed msgpack data from the network, unpacker.next() falls into an infinite loop.
This occurs only when sizeof(size_t) == 4.
Please see https://gist.github.com/nori0428/55a63422add3e956bf68
Regards.
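To make the failure mode concrete: on a platform where sizeof(size_t) == 4, doubling the buffer size can wrap past SIZE_MAX back to a small value, so the loop condition never becomes false. The sketch below is not msgpack-c code, just the arithmetic; the starting values are made-up examples of a request taken from malformed input:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t sz = 0x8000u;        /* stand-in for a 32-bit size_t buffer size */
    uint32_t size = 0x90000000u;  /* oversized request decoded from bad input */

    /* Bounded here only so the demonstration terminates; the original
       loop had no such bound and spun forever once sz wrapped to 0. */
    for (int i = 0; i < 40 && sz < size; ++i) {
        sz *= 2;  /* wraps from 0x80000000 to 0 on 32-bit arithmetic */
        printf("iteration %d: sz = 0x%08x\n", i, sz);
    }
    return 0;
}
```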