Parse base64-encoded data URIs more efficiently #10434

Closed · wants to merge 1 commit

Conversation

silby (Contributor) commented Dec 4, 2024

(in some places)

Very long data: URIs in source documents are causing outsized memory usage due to various parsing inefficiencies, for instance in Network.URI, TagSoup, and T.P.R.Markdown.source. See e.g. #10075.

This change improves the situation in a couple of places we can control relatively easily, by using an attoparsec text-specialized parser to consume base64-encoded strings. Attoparsec's takeWhile + inClass functions are designed to chew through long strings like this without doing unnecessary allocation, and the improvements in peak heap allocation are significant.

One of the observations here is that if you parse something as a valid data: URI, it shouldn't need any further escaping, so we can short-circuit various processing steps that would otherwise unpack/iterate over the characters in the URI.

The code here is organized a little bit randomly and I expect it needs more work to go in.

Good improvements though. The peak heap allocation for html -> markdown for the example provided by @ebeigarts in #10075 goes down from ~2900 MB to ~2500 MB on my computer, and markdown -> json goes from 1577 MB to 73 MB! As discussed in the issue comments, HTML reading has, at least, a TagSoup issue.

Comment on lines +25 to +39
parseBase64String = do
  Sources ((pos, txt):rest) <- getInput
  -- run the attoparsec parser over the current input chunk
  let r = A.parse pBase64 txt
  case r of
    Done remaining consumed -> do
      -- advance the source position past the consumed run and push the
      -- unconsumed remainder back as the parser input
      let pos' = incSourceColumn pos (T.length consumed)
      setInput $ Sources ((pos', remaining):rest)
      return consumed
    _ -> mzero

pBase64 :: A.Parser Text
pBase64 = do
  most <- A.takeWhile1 (A.inClass "A-Za-z0-9+/")  -- base64 alphabet
  rest <- A.takeWhile (== '=')                    -- optional '=' padding
  return $ most <> rest

jgm (Owner) commented:

Two thoughts on this:

  1. Is attoparsec really necessary? Why not just Data.Text.takeWhile?
  2. My experience is that parsers like this, which just manipulate the input directly using getInput and setInput, are problematic in parsec because parsec doesn't realize that input has been consumed. I've had to use a regular parsec parser somewhere in there to make it realize this. One option is just something like count charsConsumed anyChar, and then you don't need to compute the end position manually... (a combined sketch of both ideas follows below)
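
The following is a minimal standalone sketch (not code from the PR, and over a plain Text stream rather than pandoc's Sources; the name pBase64Run is hypothetical) combining both suggestions: Data.Text.takeWhile does the scanning, attoparsec's inClass serves only as a fast character-class predicate, and count ... anyChar replays the run through a regular parsec parser so parsec itself registers the consumption and updates the source position.

    import qualified Data.Attoparsec.Text as A (inClass)
    import qualified Data.Text as T
    import Data.Text (Text)
    import Text.Parsec
    import Text.Parsec.Text (Parser)

    -- Scan the base64 run with Data.Text.takeWhile, then consume it again
    -- through parsec's own machinery so positions and the consumed flag are
    -- updated without any getInput/setInput bookkeeping.
    pBase64Run :: Parser Text
    pBase64Run = do
      rest <- getInput
      let body = T.takeWhile (A.inClass "A-Za-z0-9+/") rest
          pad  = T.takeWhile (== '=') (T.drop (T.length body) rest)
          run  = body <> pad
      if T.null body
         then parserZero
         else T.pack <$> count (T.length run) anyChar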

silby (Contributor, author) replied:

  1. In this spot I can possibly just borrow the fast inClass function from attoparsec and use it with regular text takeWhile; will have to fiddle with it.
  2. Will investigate. I took it for granted that I would make the parsec state happy by fiddling with the input as seen here, but that was not based on deep understanding or rigorous analysis.

jgm (Owner) replied:

It's probably fine to use attoparsec, but there might be a slight speedup if you can avoid it.

On 2: you could try putting this parser under many and see if parsec complains.
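
A standalone illustration (again not the PR's code, and over a plain Text stream; the name sneakyTake is hypothetical) of what that test reveals: a parser that advances the input only through setInput never sets parsec's "consumed" flag, so many treats it as a parser that accepts the empty string and complains at runtime.

    import qualified Data.Text as T
    import Data.Text (Text)
    import Text.Parsec
    import Text.Parsec.Text (Parser)

    -- Succeeds and advances the stream via setInput, but from parsec's
    -- point of view nothing was consumed.
    sneakyTake :: Int -> Parser Text
    sneakyTake n = do
      inp <- getInput
      setInput (T.drop n inp)
      return (T.take n inp)

    -- Running this aborts with: "Text.ParserCombinators.Parsec.Prim.many:
    -- combinator 'many' is applied to a parser that accepts an empty string."
    main :: IO ()
    main = print $ runParser (many (sneakyTake 3)) () "<test>" (T.pack "abcdefghi")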

, textStr ";" <* trace "cool"
, (mconcat <$> many mediaParam)
, textStr "base64," <* trace "fine"
, parseBase64String

jgm (Owner) commented:


What if we replace l.1835 with

   many1Char (satisfy (A.inClass "A-Za-z0-9+/"))
,  manyChar (char '=')

and get rid of the special T.P.Parsing.Base64 module? What is the impact on performance on the sorts of files that were problematic before? I'd like to try the simplest possible thing before worrying about possible optimizations.
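
A standalone sketch of this suggestion (the name pBase64Chars is hypothetical; pandoc's many1Char/manyChar are its Text-returning wrappers, so plain many1/many plus T.pack stand in here): the base64 run is parsed with ordinary parsec combinators, and attoparsec contributes only its inClass character predicate, not its parsing machinery.

    import qualified Data.Attoparsec.Text as A (inClass)
    import qualified Data.Text as T
    import Data.Text (Text)
    import Text.Parsec
    import Text.Parsec.Text (Parser)

    pBase64Chars :: Parser Text
    pBase64Chars = do
      body <- many1 (satisfy (A.inClass "A-Za-z0-9+/"))  -- base64 alphabet
      pad  <- many (char '=')                            -- optional padding
      return $ T.pack (body ++ pad)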

jgm added a commit that referenced this pull request Dec 18, 2024
This patch borrows some code from @silby's PR #10434 and should
be regarded as co-authored.  This is a lighter-weight patch
that only touches the Markdown reader.

The basic idea is to speed up parsing of base64 URIs by parsing
them with a special path.  This should improve the problem
noted at #10075.

Benchmarks (optimized compilation):

Converting the large test.md from #10075 (7.6 MB embedded image)
from markdown to json,

before: 6182 GCs, 1578M in use, 5.471 MUT (5.316 elapsed), 1.473 GC (1.656 elapsed)

after: 951 GCs, 80M in use, 0.247 MUT (1.205 elapsed), 0.035 GC (0.242 elapsed)

For now we leave #10075 open to investigate improvements in
HTML rendering with these large data URIs.

jgm commented Dec 18, 2024

I borrowed some code from this in a more minimal commit, 1a8da4f, which leads to big improvements. Still need to investigate the HTML rendering side of things.

jgm added a commit that referenced this pull request Dec 19, 2024
Text.Pandoc.URI: export `pBase64DataURI`.  Modify `isURI` to use this
and avoid calling network-uri's inefficient `parseURI` for data URIs.

Markdown reader: use T.P.URI's `pBase64DataURI` in parsing data
URIs.

Partially addresses #10075.

Obsoletes #10434 (borrowing most of its ideas).

Co-authored-by: Evan Silberman <evan@jklol.net>

silby commented Dec 19, 2024

Harvested and improved by @jgm in other commits; closing

silby closed this Dec 19, 2024