Efficient character-based file parsing for CSV and JSON formats.

- Zero dependencies.
- As fast as univocity or jackson.
- Same API as clojure.data.csv and clojure.data.json, implemented far more efficiently.
user> (require '[charred.api :as charred])
nil
user> (charred/read-json "{\"a\": 1, \"b\": 2}")
{"a" 1, "b" 2}
user> (charred/read-json "{\"a\": 1, \"b\": 2}" :key-fn keyword)
{:a 1, :b 2}
user> (println (charred/write-json-str *1))
{
"a": 1,
"b": 2
}
If you are reading or writing a lot of small JSON objects, the best option is to create a specialized parse fn with exactly the options you need and pass it strings or char[] data. A similar pathway exists for high-performance writing of JSON objects. The returned functions are safe to use in multithreaded contexts.
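For example, a minimal sketch of that pathway, assuming `charred.api/parse-json-fn` is the constructor for the specialized parse fn and that it takes the same options as `read-json` (check the charred.api docstrings for the exact option names):

```clojure
;; Build the parse fn once with exactly the options you need,
;; then reuse it across calls and threads.
(def parse-json (charred/parse-json-fn {:key-fn keyword
                                        :async? false
                                        :bufsize 8192}))

(parse-json "{\"a\": 1, \"b\": 2}") ;;=> {:a 1, :b 2}
```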
The system is overall tuned for large files. Small files or input streams should be set up with `:async? false` and a smaller `:bufsize` argument such as 8192, as there is no gain from async loading when the file/stream is smaller than 1MB.
For smaller streams, slurping into strings on an offline threadpool will lead to the highest performance. If you know you are going to parse many files of a particular size, you should grid-search `:bufsize` and `:async?`, as that is a tuning pathway that I haven't put a ton of time into. In general the system is tuned towards larger files, as that is when performance really matters.
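As a rough illustration of that grid search, here is a sketch; the candidate values, the `data.json` file, and the `time-it` helper are hypothetical and only show the shape of the loop:

```clojure
;; Hypothetical micro-benchmark: time read-json over one representative file
;; for each :bufsize/:async? combination and report the fastest settings.
(defn time-it
  "Run f once and return elapsed milliseconds."
  [f]
  (let [start (System/nanoTime)]
    (f)
    (/ (- (System/nanoTime) start) 1.0e6)))

(def results
  (for [bufsize [8192 65536 1048576]
        async?  [true false]]
    {:bufsize bufsize
     :async?  async?
     :ms      (time-it #(charred/read-json (java.io.File. "data.json")
                                           :bufsize bufsize
                                           :async? async?))}))

(first (sort-by :ms results))
```

A single run per combination is noisy; in practice you would repeat each measurement and take the minimum or mean.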
All the parsing systems have mutable options. These can be somewhat faster, and it is interesting to look at the tradeoffs involved. Parsing a CSV via the raw supplier interface is a bit faster than using the Clojure sequence pathway into persistent vectors, and since it probably doesn't change your consume pathway much it may be worth trying.
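For instance, a sketch of consuming rows straight from the supplier, assuming `charred.api/read-csv-supplier` returns an auto-closeable, `java.util.function.Supplier`-style object that yields rows until it returns nil (see the charred.api docstrings for the exact contract):

```clojure
;; Count rows without going through the Clojure sequence pathway.
(with-open [rows (charred/read-csv-supplier (java.io.File. "data.csv"))]
  (loop [row (.get rows)
         n   0]
    (if row
      (recur (.get rows) (inc n))
      n)))
```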
Before running a REPL you must compile the Java files into `target/classes` (this directory will then be on your classpath) by running `scripts/compile`. Tests can be run with `scripts/run-tests`, which will compile the Java files and then run the tests.
MIT license.