
base64 decoder could be 2x faster when decoding wrapped base64 #12114

Closed
jorangreef opened this issue Mar 29, 2017 · 9 comments
Labels
buffer Issues and PRs related to the buffer subsystem. c++ Issues and PRs that require attention from people who are familiar with C++. performance Issues and PRs related to the performance of Node.js.

Comments

@jorangreef
Contributor

Node's base64 decoder currently uses a fast decoder and a slow decoder.

The fast decoder decodes 32-bit words at a time. If it encounters a line break, whitespace, or other garbage, it switches permanently to the slow decoder, which decodes a byte at a time, with a conditional branch per byte instead of per 32-bit word.

I did some rough benchmarking comparing decoding of a 4 MB random buffer encoded as base64 against decoding the same base64 with CRLFs added every 76 characters, as per MIME base64:

Decode Fast: 8ms
Decode Fast: 9ms
Decode Fast: 9ms
Decode Slow (wrapped every 76 chars): 30ms
Decode Slow (wrapped every 76 chars): 20ms
Decode Slow (wrapped every 76 chars): 22ms

As far as I can see, there's no reason to switch permanently to the slow decoder. If the fast decoder detects that a 32-bit word contains an invalid character, it could decode the next few bytes byte-by-byte, then switch back to fast mode as soon as it has consumed 4 valid base64 characters and output 3 bytes. This could all be a sub-branch after the 32-bit word check, so it should not affect the performance of the fast decoder at all.

For base64-decoding MIME data, this should nearly double the throughput, since the slow case would be triggered only once per 76-character line.
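To illustrate the idea, here is a minimal JavaScript sketch of the proposed strategy (a simplified model only, not Node's actual C++ implementation, and with abbreviated padding handling): decode 4-character groups on the fast path, and on an invalid character fall back byte-by-byte just long enough to assemble one 24-bit group before resuming the fast path.

```javascript
// Lookup table: base64 character code -> 6-bit value, -1 for invalid.
const TABLE = new Int8Array(256).fill(-1);
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
for (let i = 0; i < 64; i++) TABLE[ALPHABET.charCodeAt(i)] = i;

function decodeBase64(str) {
  const out = [];
  let i = 0;
  let end = str.length;
  while (end > 0 && str[end - 1] === '=') end--; // strip trailing padding

  while (i < end) {
    // Fast path: try to decode a whole 4-character group at once.
    if (i + 4 <= end) {
      const a = TABLE[str.charCodeAt(i)];
      const b = TABLE[str.charCodeAt(i + 1)];
      const c = TABLE[str.charCodeAt(i + 2)];
      const d = TABLE[str.charCodeAt(i + 3)];
      if ((a | b | c | d) >= 0) { // no invalid character in the group
        const word = (a << 18) | (b << 12) | (c << 6) | d;
        out.push(word >> 16, (word >> 8) & 0xff, word & 0xff);
        i += 4;
        continue;
      }
    }
    // Slow path: consume characters one at a time, skipping invalid ones,
    // until one 24-bit group (4 valid characters) is complete, then the
    // outer loop resumes the fast path.
    const group = [];
    while (i < end && group.length < 4) {
      const v = TABLE[str.charCodeAt(i++)];
      if (v >= 0) group.push(v);
    }
    if (group.length === 4) {
      const word = (group[0] << 18) | (group[1] << 12) | (group[2] << 6) | group[3];
      out.push(word >> 16, (word >> 8) & 0xff, word & 0xff);
    } else {
      // Partial trailing group: 2 chars -> 1 byte, 3 chars -> 2 bytes.
      if (group.length >= 2) out.push((group[0] << 2) | (group[1] >> 4));
      if (group.length === 3) out.push(((group[1] & 0x0f) << 4) | (group[2] >> 2));
    }
  }
  return Buffer.from(out);
}
```

The point of the structure is that the slow branch only runs in the iterations where an invalid character was actually seen, so unwrapped input never pays for it.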

@mscdex mscdex added buffer Issues and PRs related to the buffer subsystem. c++ Issues and PRs that require attention from people who are familiar with C++. labels Mar 29, 2017
@jorangreef jorangreef changed the title base64 decoder could be 2x faster when base64 data is wrapped base64 decoder could be 2x faster when decoding wrapped base64 Mar 29, 2017
@addaleax addaleax added the performance Issues and PRs related to the performance of Node.js. label Mar 29, 2017
@sathvikl

Can you please post the JS code sample used to test this? I could probably take a look at it.

@jorangreef
Contributor Author

jorangreef commented Mar 31, 2017

var crypto = require('crypto');
var data = crypto.randomBytes(32*1024*1024);
var base64 = '';
var now = 0;

var times = 3;
while (times--) {
  now = Date.now();
  base64 = data.toString('base64');
  console.log('Encode: ' + (Date.now() - now) + 'ms');
}

var times = 3;
while (times--) {
  now = Date.now();
  Buffer.from(base64, 'base64');
  console.log('Decode Fast: ' + (Date.now() - now) + 'ms');
}

function wrap(string) {
  var lines = [];
  var index = 0;
  var length = string.length;
  while (index < length) {
    lines.push(string.slice(index, index += 76));
  }
  return lines.join('\r\n');
}

base64 = wrap(base64);

var times = 3;
while (times--) {
  now = Date.now();
  Buffer.from(base64, 'base64');
  console.log('Decode Slow: ' + (Date.now() - now) + 'ms');
}

@aqrln
Contributor

aqrln commented Mar 31, 2017

@jorangreef tried to do that, but the performance increase is only 10–11% for me: #12146.

aqrln added a commit to aqrln/node that referenced this issue Mar 31, 2017
The fast base64 decoder used to switch to the slow one permanently when
it saw a whitespace or other garbage character.  Since the most common
situation such characters may be encountered in is line-wrapped base64
data, a more profitable strategy is to decode a single 24-bit group with
the slow decoder and then continue running the fast algorithm.

Refs: nodejs#12114
@jorangreef
Contributor Author

@aqrln I just finished https://github.com/ronomon/base64 which is an alternative C++ buffer-to-buffer encoder/decoder (you can decode a buffer containing base64 without allocating an interim string). If you take a look at the C++ binding source, it handles the slow case without switching permanently to slow mode. I also included a simplistic fast-slow.js benchmark in the source for this exact issue:


 Decoding Base64:

    Node: Decode: 42ms
    Node: Decode: 41ms
    Node: Decode: 42ms

  Base64: Decode: 45ms
  Base64: Decode: 44ms
  Base64: Decode: 46ms

 Decoding Base64 (wrapped every 76 characters):

    Node: Decode: 117ms
    Node: Decode: 119ms
    Node: Decode: 119ms

  Base64: Decode: 53ms
  Base64: Decode: 54ms
  Base64: Decode: 54ms

jasnell pushed a commit that referenced this issue Apr 4, 2017
The fast base64 decoder used to switch to the slow one permanently when
it saw a whitespace or other garbage character.  Since the most common
situation such characters may be encountered in is line-wrapped base64
data, a more profitable strategy is to decode a single 24-bit group with
the slow decoder and then continue running the fast algorithm.

PR-URL: #12146
Ref: #12114
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Trevor Norris <trev.norris@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
@jorangreef
Contributor Author

Thanks @aqrln. Just wanted to check: were you able to increase the performance by close to 100% in the end?

@aqrln
Contributor

aqrln commented Apr 7, 2017

@jorangreef nope, I haven't worked on this since that PR. If you'd like to optimize it more to achieve the performance of your userland library, it would be really great. If not, then I could maybe take a look at it later.

@jorangreef
Contributor Author

Thanks @aqrln, I hope you can run with it and get it all the way there; your C++ is probably better than mine. If not, you are welcome to copy the decoder verbatim from my source.

@aqrln aqrln self-assigned this Apr 7, 2017
italoacasas pushed a commit to italoacasas/node that referenced this issue Apr 10, 2017
The fast base64 decoder used to switch to the slow one permanently when
it saw a whitespace or other garbage character.  Since the most common
situation such characters may be encountered in is line-wrapped base64
data, a more profitable strategy is to decode a single 24-bit group with
the slow decoder and then continue running the fast algorithm.

PR-URL: nodejs#12146
Ref: nodejs#12114
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Trevor Norris <trev.norris@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
@jorangreef
Contributor Author

jorangreef commented Jul 25, 2017

Here's the latest on this with node v8.2.1.

Generated by https://github.com/ronomon/base64/blob/master/fast-slow.js:


 Decoding Base64:

    Node: Decode: 47ms
    Node: Decode: 49ms
    Node: Decode: 48ms

  Base64: Decode: 46ms
  Base64: Decode: 46ms
  Base64: Decode: 47ms

 Decoding Base64 (wrapped every 76 characters):

    Node: Decode: 110ms
    Node: Decode: 111ms
    Node: Decode: 112ms

  Base64: Decode: 56ms
  Base64: Decode: 57ms
  Base64: Decode: 53ms

The Base64 reference is at https://github.com/ronomon/base64.

@bnoordhuis
Member

Seeing there's been no real movement on this in over a year, I'll go ahead and close this out. Pull requests still welcome, of course.


6 participants