<!doctype html>
<html>
<head>
<meta charset="utf-8"/>
<title>Community Tools</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="stylesheet" type="text/css" href="fpbench.css">
</head>
<body>
<header>
<a href='.' style='color: black; text-decoration: none;'>
<img src='img/logo.png' height='150' alt='FPBench Logo' />
<h1>FPBench Community Tools</h1>
</a>
<p>Numerics research tools and systems</p>
<ul>
<li><a href="index.html">Events</a></li>
<li><a href="education.html">Education</a></li>
<li><a href="community.html">Tools</a></li>
<li><a href="https://groups.google.com/a/fpbench.org/g/fpbench">Mailing List</a></li>
</ul>
</header>
<p> The floating-point research community has developed many tools that
authors of numerical software can use to test, analyze, or improve their code.
They are listed here in alphabetical order and categorized into four general
types: 1) <a href="#dynamic">dynamic tools</a>, 2) <a href="#static">static
tools</a>, 3) <a href="#commercial">commercial tools</a>, and 4)
<a href="#misc">miscellaneous</a>. You can find tutorials for some of these
tools at <a href="https://fpanalysistools.org/">fpanalysistools.org</a>, and
many of the reduced-precision tools were compared in a
<a href="https://doi.org/10.1145/3381039">recent survey</a> by authors from
the Polytechnic University of Milan. </p>
<p><em>Tools on the list are not endorsed, and need not endorse or
support the FPBench project.</em> The list is intended to be
exhaustive.</p>
<p>Additionally, <a href="https://github.com/FPBench/FPBench">the
FPBench project</a>
provides <a href="benchmarks.html">benchmarks</a>,
<a href="https://github.com/FPBench/FPBench/blob/main/tools.md">compilers</a>,
and <a href="spec/index.html">standards</a> for floating-point
research tools.</p>
<a name="dynamic"></a>
<h2> Dynamic Tools </h2>
<dl class="dynamic-tools">
<dt>ADAPT</dt>
<dd>ADAPT (Automatic Differentiation Applied to Precision Tuning) is a tool
developed by <a href="https://w3.cs.jmu.edu/lam2mo/">Mike Lam</a> from JMU
and <a href="https://www.harshithamenon.com/">Harshitha Menon</a> from LLNL. It
uses the reverse mode of algorithmic differentiation to determine how much
precision is needed in a program's inputs and intermediate results in order
to achieve a desired accuracy in its output, converting this information
into precision recommendations.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/SC.2018.00051">"ADAPT: Algorithmic Differentiation Applied to Floating-point Precision Tuning"</a> (SC'18) </li>
<li> Source: <a href="https://github.com/llnl/adapt-fp">github.com/llnl/adapt-fp</a> </li>
</ul> </dd>
<dt>AMP</dt>
<dd>AMP (Automated Mixed Precision) is a tool developed by Ralph Nathan and
<a href="https://people.ee.duke.edu/~sorin/">Daniel Sorin</a> (Duke University)
for mixed-precision analysis. It begins with a single-precision version of a
program and uses a variety of metrics and heuristics to gradually raise certain
portions of the program to higher precision in order to achieve the desired
accuracy in the outputs. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/PRDC.2013.44">"Applying Reduced Precision Arithmetic to Detect Errors in Floating Point Multiplication"</a> (PRDC'13) </li>
<li> Paper: <a href="https://doi.org/10.1109/SC.2014.15">"Recycled Error Bits: Energy-Efficient Architectural Support for Floating Point Accuracy"</a> (SC'14) </li>
<li> Paper: <a href="https://arxiv.org/abs/1606.00251">"Profile-Driven Automated Mixed Precision"</a> (arXiv) </li>
</ul> </dd>
<dt>AMPT-GA</dt>
<dd>AMPT-GA is a tool developed by a multi-institutional team for automated
source-level mixed-precision for GPU applications, combining various static and
dynamic analysis approaches.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3330345.3330360">"AMPT-GA: Automatic Mixed Precision Floating Point Tuning for GPU Applications"</a> (ICS'19) </li>
</ul> </dd>
<dt><span style="font-variant: small-caps;">Atomu</span></dt>
<dd><span style="font-variant: small-caps;">Atomu</span> is a tool
developed by <a href="https://damingz.github.io">Daming Zou</a>
(Peking University) that uses "atomic" condition numbers of individual
floating-point operations to estimate the introduced error. It also performs
an evolutionary search to find inputs that trigger the highest condition
number for each operation. The instrumentation portion of the tool is
implemented as an LLVM pass. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3371128">"Detecting Floating-Point Errors via Atomic Conditions"</a> (POPL'20) </li>
<li> Source: <a href="https://github.com/FP-Analysis/atomic-condition">github.com/FP-Analysis/atomic-condition</a> </li>
</ul> </dd>
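<dd> The intuition behind atomic conditions can be illustrated with the textbook condition number of subtraction (a generic formulation for illustration; Atomu's precise per-operation definitions are given in the paper): subtracting nearly equal values yields an enormous amplification factor for any incoming relative error.

```python
def cond_sub(a, b):
    """Textbook condition number of the operation a - b: the factor by
    which relative errors in the inputs are amplified in the result."""
    return (abs(a) + abs(b)) / abs(a - b)

# Well-conditioned: operands far apart, amplification is mild.
mild = cond_sub(3.0, 1.0)

# Ill-conditioned (catastrophic cancellation): operands nearly equal,
# so the condition number is on the order of 1e12.
huge = cond_sub(1.0 + 1e-12, 1.0)
```

A search that drives inputs toward large values of such per-operation condition numbers is one way to expose the operations where floating-point error is introduced.</dd>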
<dt>CADNA / PROMISE</dt>
<dd>CADNA (Control of Accuracy and Debugging for Numerical Applications)
estimates round-off errors by replacing floating-point arithmetic with
discrete stochastic arithmetic (DSA), essentially modeling error using
randomized rounding (up or down) instead of round-to-nearest-even. A new tool
called PROMISE (PRecision OptiMISE) uses this technique to perform a
delta-debugging search and report mixed precisions for C and C++ programs.
</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1016/j.cpc.2008.02.003">"CADNA: A Library for Estimating Round-off Error Propagation"</a> (CPC'08) </li>
<li> Paper: <a href="https://hal.science/hal-01331917">"Auto-tuning for Floating-point Precision with Discrete Stochastic Arithmetic"</a> (preprint) </li>
<li> Website (CADNA): <a href="http://cadna.lip6.fr/index.php">cadna.lip6.fr</a> </li>
<li> Website (PROMISE): <a href="http://promise.lip6.fr/">promise.lip6.fr</a> </li>
</ul> </dd>
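<dd> The randomized-rounding idea behind DSA can be sketched in a few lines of stdlib Python (a toy illustration, not the CADNA API): run a computation several times while nudging each intermediate result up or down by one ulp at random; the spread of the results indicates how many digits can be trusted.

```python
import math
import random

def perturb(x):
    # Move x one ulp up or down, chosen at random -- a crude stand-in
    # for randomly rounding the operation that produced x.
    return math.nextafter(x, math.inf if random.random() < 0.5 else -math.inf)

def cancellation_prone():
    # The exact mathematical result is 1.0, but the computation cancels
    # two values near 1e16, where doubles are spaced 2.0 apart.
    a, b = perturb(1e16), perturb(1.0)
    return perturb(a + b) - perturb(1e16)

random.seed(0)
samples = [cancellation_prone() for _ in range(20)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
# A spread comparable to (or larger than) the mean signals that the
# computed result has essentially no significant digits.
```
</dd>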
<dt>CRAFT</dt>
<dd> CRAFT (Configurable Runtime Analysis for Floating-Point Tuning) was
originally a framework based on the <a
href="https://github.com/dyninst/dyninst">Dyninst</a>
binary analysis toolkit to perform various dynamic floating-point analyses,
including cancellation detection, dynamic range tracking, and automated
mixed-precision analysis. CRAFT can perform an automated search of a
program's instruction space, determining the level of precision necessary in
the result of each instruction to pass a user-provided verification routine
assuming all other operations are done in double precision. This gives a
general overview of the approximate precision requirements of the program,
providing insights and guidance for mixed-precision implementation. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1016/j.parco.2012.08.002">"Dynamic Floating-Point Cancellation Detection"</a> (PARCO'13) </li>
<li> Paper: <a href="https://doi.org/10.1145/2464996.2465018">"Automatically Adapting Programs for Mixed-Precision Floating-Point Computation"</a> (ICS'13) </li>
<li> Paper: <a href="https://doi.org/10.1177%2F1094342016652462">"Fine-Grained Floating-Point Precision Analysis"</a> (IJHPCA'16) </li>
<li> Source: <a href="https://github.com/crafthpc/craft/">github.com/crafthpc/craft</a> </li>
</ul> </dd>
<dd> There is a branch of CRAFT by Andrew Shewmaker (OpenEye Scientific,
formerly LANL) that adds a new analysis mode for extracting histograms of
all floating-point values encountered during a program run. He has also been
building a variant of this tool based on SHVAL (described below). </dd>
<dd> <ul>
<li> Source: <a href="https://github.com/agshew/craft">github.com/agshew/craft</a> </li>
<li> Source: <a href="https://github.com/agshew/shval">github.com/agshew/shval</a> </li>
</ul> </dd>
<dt>FloatSmith</dt>
<dd> FloatSmith is an integration project that provides automated
source-to-source tuning and transformation of floating-point code for mixed
precision using three existing tools: 1) CRAFT, 2) ADAPT, and 3) TypeForge. The
primary search loop is guided by CRAFT and uses TypeForge to prototype
mixed-precision versions of an application. ADAPT results can optionally be used
to help narrow the search space, and there are a variety of options to customize
the search. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/Correctness49594.2019.00009">"Tool Integration for Source-Level Mixed Precision"</a> (Correctness'19) </li>
<li> Source: <a href="https://github.com/crafthpc/floatsmith">github.com/crafthpc/floatsmith</a> </li>
</ul> </dd>
<dt>FPDetect</dt>
<dd> FPDetect is a tool from the University of Utah for synthesizing error
detectors for stencil applications. It begins with an offline phase in which
affine analysis is used to tightly estimate the floating-point precision
preserved across computations for intervals of data points. This bound, along
with an auxiliary direct-evaluation technique, is then used to
instantiate optimally situated detectors across the compute space when
executing online. Violations of this bound can be attributed with certainty
to errors such as soft errors or logical bugs. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3402451">"FPDetect: Efficient Reasoning About Stencil Programs Using Selective Direct Evaluation"</a> (TACO'20) </li>
</ul> </dd>
<dt>FPDiff</dt>
<dd> FPDiff is a tool by Jackson Vanover (UC Davis) that performs automated
differential testing on mathematical library source code, extracting
numerical function signatures, synthesizing drivers, creating equivalence
classes of synonymous functions, and executing tests to detect meaningful
numerical discrepancies between implementations. </dd>
<dd> <ul>
<li> Paper: "Discovering Discrepancies in Numerical Libraries" (to appear, ISSTA'20) </li>
<li> Source: <a href="https://github.com/ucd-plse/FPDiff">github.com/ucd-plse/FPDiff</a> </li>
</ul> </dd>
<dt>FPED</dt>
<dd> FPED is a dynamic analysis that detects arithmetic expression errors
using MPFR and test set generation. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/s11227-020-03469-7">"Error detection of arithmetic expressions"</a> (J Supercomputing 2020)</li>
</ul> </dd>
<dt>FPGen</dt>
<dd> FPGen is a tool by <a href="https://huiguoo.github.io">Hui Guo</a>
(UC Davis) that uses symbolic execution to generate inputs that maximize numerical error,
finding more errors than S3FP or KLEE. </dd>
<dd> <ul>
<li> Paper: "Efficient Generation of Error-Inducing Floating-Point Inputs via Symbolic Execution" (to appear, ICSE'20) </li>
<li> Source: <a href="https://github.com/ucd-plse/fpgen">github.com/ucd-plse/fpgen</a> </li>
</ul> </dd>
<dt>fpPrecisionTuning</dt>
<dd> fpPrecisionTuning is a combination of two Python tools (C2mpfr and
DistributedSearch) that together can be used to find mixed-precision versions
of C programs. Like CRAFT and Precimonious, this tool performs a search over
the mixed-precision search space using a user-given error bound, but this tool
uses MPFR and source code modification to simulate non-standard precisions. The
authors have applied this toolchain to a variety of examples from different
domains including fixed-precision computations on an FPGA. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/ASPDAC.2017.7858297">"Efficient Floating Point Precision Tuning for Approximate Computing"</a> (ASP-DAC'17) </li>
<li> Source: <a href="https://github.com/minhhn2910/fpPrecisionTuning">github.com/minhhn2910/fpPrecisionTuning</a> </li>
</ul> </dd>
<dt>FPSpy</dt>
<dd> FPSpy is a tool by <a href="https://users.cs.northwestern.edu/~pdinda/">Peter Dinda</a>
(Northwestern) that uses several different forms of runtime analysis to
detect and measure various floating-point issues. It is implemented as an
<code>LD_PRELOAD</code> library and works on unmodified x64 executables,
monitoring floating-point exceptions and other events of interest to
report results with varying levels of granularity and overhead. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3369583.3392673">"Spying on the Floating Point Behavior of Existing, Unmodified Scientific Applications"</a> (HPDC'20) </li>
<li> Download: <a href="http://presciencelab.org/FPSpy/fpspy-1.0.tgz">fpspy-1.0.tgz</a> </li>
</ul> </dd>
<dt>GPU-FPtuner</dt>
<dd> GPU-FPtuner is a mixed-precision autotuner for GPU applications, built
on top of the Rose compiler framework. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/HiPC50609.2020.00043">"GPU-FPtuner: Mixed-precision Auto-tuning for Floating-point Applications on GPU"</a> (HiPC'20) </li>
<li> Source: <a href="https://github.com/raydongpub/GPU-FPtuner">github.com/raydongpub/GPU-FPtuner</a> </li>
</ul> </dd>
<dt>GPUMixer</dt>
<dd> GPUMixer is a tool written by a multi-institutional team for LLVM-based
mixed-precision analysis that is performance-focused, looking for potential
speedups first and then verifying their accuracy afterwards. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-030-20656-7_12">"GPUMixer: Performance-Driven Floating-Point Tuning for GPU Scientific Applications"</a> (ISC'19, best paper) </li>
</ul> </dd>
<dt>Herbie</dt>
<dd>Herbie automatically improves the accuracy of floating-point
expressions. Herbie has been used to fix two bugs in
<a href="https://mathjs.org/">math.js</a>, and it also provides a
<a href="https://herbie.uwplse.org/doc/latest/using-web.html">web interface</a>.
</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2813885.2737959">"Automatically Improving Accuracy for Floating Point Expressions"</a> (PLDI'15) </li>
<li> Website: <a href="https://herbie.uwplse.org">herbie.uwplse.org</a> </li>
<li> Source: <a href="https://github.com/herbie-fp/herbie">github.com/herbie-fp/herbie</a> </li>
</ul> </dd>
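<dd> The flavor of this rewriting can be seen in a standard example of the kind Herbie targets (the textbook transformation, shown here with illustrative helper names, not Herbie's own code): for large x, sqrt(x+1) - sqrt(x) cancels catastrophically, while the algebraically equivalent 1/(sqrt(x+1) + sqrt(x)) stays accurate.

```python
import math
from decimal import Decimal, getcontext

def naive(x):
    # Subtracts two nearly equal square roots: catastrophic cancellation.
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Algebraically identical, but avoids the subtraction entirely.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

# High-precision reference value via the decimal module.
getcontext().prec = 50
x = 1e15  # exactly representable as a double
ref = (Decimal(x) + 1).sqrt() - Decimal(x).sqrt()

rel_naive = abs(Decimal(naive(x)) - ref) / ref
rel_rewritten = abs(Decimal(rewritten(x)) - ref) / ref
# rel_naive is several percent; rel_rewritten is near machine epsilon.
```
</dd>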
<dd> The authors of Daisy and Herbie have worked together to
combine their tools: </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-95582-7_21">"Combining Tools for Optimization and Analysis of Floating-Point Computations"</a> (FM2018)</li>
</ul> </dd>
<dt>Herbgrind</dt>
<dd>Herbgrind is a dynamic analysis tool for binaries (built on Valgrind)
to help find the root cause of floating-point error in large programs.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3192366.3192411">"Finding Root Causes of Floating Point Error"</a> (PLDI'18) </li>
<li> Website: <a href="https://herbgrind.ucsd.edu">herbgrind.ucsd.edu</a> </li>
<li> Source: <a href="https://github.com/uwplse/herbgrind">github.com/uwplse/herbgrind</a> </li>
</ul> </dd>
<dt>HiFPTuner</dt>
<dd> HiFPTuner is an extension of Precimonious (see below) that uses dependence
analysis and edge profiling to enable a more efficient hierarchical search
algorithm for mixed-precision configurations. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3213846.3213862">"Exploiting Community Structure for Floating-Point Precision Tuning"</a> (ISSTA'18) </li>
<li> Source: <a href="https://github.com/ucd-plse/HiFPTuner">github.com/ucd-plse/HiFPTuner</a> </li>
</ul> </dd>
<dt>pLiner</dt>
<dd> pLiner is a tool for automatically identifying code that will trigger
compiler-induced numerical variability. It works by combining selective
precision enhancements and a guided dynamic search, decreasing the amount
of time required to detect issues. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/SC41405.2020.00053">"pLiner: Isolating Lines of Floating-Point Code for Compiler-Induced Variability"</a> (SC'20) </li>
<li> GitHub: <a href="https://github.com/llnl/pliner">LLNL/pLiner</a> </li>
</ul> </dd>
<dt>PositDebug / FPSanitizer</dt>
<dd> PositDebug and FPSanitizer are tools from the Rutgers Architecture and
Programming Languages group. Both use LLVM-based instrumentation and MPFR
shadow values to check Posit (the former) and regular floating-point code
(the latter). This approach is faster than the Valgrind-based
instrumentation used by Herbgrind. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3385412.3386004">"Debugging and Detecting Numerical Errors in Computation with Posits"</a> (PLDI'20) </li>
<li> Source: <a href="https://github.com/rutgers-apl/PositDebug">github.com/rutgers-apl/PositDebug</a> </li>
<li> Source: <a href="https://github.com/rutgers-apl/fpsanitizer">github.com/rutgers-apl/fpsanitizer</a> </li>
</ul> </dd>
<dt>Precimonious</dt>
<dd> Precimonious is a tool written by <a
href="https://web.cs.ucdavis.edu/~rubio/">Cindy Rubio González</a> (currently UC
Davis) that leverages the <a href="https://llvm.org">LLVM</a> framework to tweak
variable declarations to build and prototype mixed-precision configurations. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2503210.2503296">"Precimonious: Tuning Assistant for Floating-Point Precision"</a> (SC'13) </li>
<li> Original source: <a href="https://github.com/corvette-berkeley/precimonious">github.com/corvette-berkeley/precimonious</a> </li>
<li> Newer source: <a href="https://github.com/ucd-plse/precimonious">github.com/ucd-plse/precimonious</a> </li>
</ul> </dd>
<dd> More recent work uses shadow value analysis to speed up the search. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2884781.2884850">"Floating-Point Precision Tuning Using Blame Analysis"</a> (ICSE'16) </li>
<li> Source: <a href="https://github.com/corvette-berkeley/shadow-execution">github.com/corvette-berkeley/shadow-execution</a> </li>
</ul> </dd>
<dt>PyFloT</dt>
<dd> PyFloT is a tool for mixed-precision tuning of applications using an
instruction-centric analysis that uses call stack information and temporal
locality to address the large scale of HPC scientific programs. PyFloT
supports applications written in a combination of Python, CUDA, C++, and
Fortran. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/SC41405.2020.00054">"Tuning Floating-Point Precision Using Dynamic Program Information and Temporal Locality"</a> (SC'20) </li>
<li> GitHub: <a href="https://github.com/hbrunie/PyFloT">hbrunie/PyFloT</a> </li>
</ul> </dd>
<dt>Shaman</dt>
<dd> Shaman is a library that uses operator overloading and error-free
transformations to track numerical error through computations by
replacing floating-point numbers with instrumented alternatives. It can
evaluate the number of significant digits of any floating-point number in
a computation and has been designed with a focus on high-performance
computing and simulation. It is compatible with mixed precision and
parallelism and does not interfere with the control flow of the program.
</dd>
<dd> <ul>
<li> Thesis: <a href="https://theses.hal.science/tel-03116750">"Compromise between precision and performance in high-performance computing"</a></li>
<li> Source: <a href="https://gitlab.com/numerical_shaman/shaman">gitlab.com/numerical_shaman/shaman</a></li>
<li> Web interface: <a href="https://repl.it/@nestordemeure/ShamanDemo?lite=true#main.cpp">repl.it/~nestordemeure/ShamanDemo</a></li>
</ul> </dd>
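<dd> The error-free transformations mentioned above can be sketched as follows (a generic textbook version, Dekker's FastTwoSum, not Shaman's own code): when |a| is at least |b|, two extra operations recover exactly the rounding error committed by a floating-point addition, which an instrumented type can then accumulate alongside each value.

```python
def fast_two_sum(a, b):
    """Dekker's FastTwoSum: assumes abs(a) >= abs(b).
    Returns (s, e) with s == fl(a + b) and s + e equal to a + b exactly."""
    s = a + b
    e = b - (s - a)  # the part of b lost when rounding a + b
    return s, e

# Adding 1.0 to 1e16 loses the 1.0 entirely (doubles near 1e16 are 2 apart),
# but the transformation recovers it exactly in the error term e.
s, e = fast_two_sum(1e16, 1.0)
```
</dd>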
<dt>SHVAL</dt>
<dd> SHVAL (SHadow Value Analysis Library) is a tool by
<a href="https://w3.cs.jmu.edu/lam2mo/">Mike Lam</a> that uses the
<a href="https://www.intel.com/content/www/us/en/developer/articles/tool/pin-a-dynamic-binary-instrumentation-tool.html">Intel Pin</a>
instrumentation framework to perform full heavyweight shadow-value analysis for
floating-point arithmetic, effectively allowing developers to simulate any
real-numbered representation for which a plugin can be developed. </dd>
<dd> <ul>
<li> Paper: <a href="https://dl.acm.org/citation.cfm?id=3018826">"Floating-Point Shadow Value Analysis"</a> (ESPT'16)</li>
<li> Source: <a href="https://github.com/crafthpc/shval">github.com/crafthpc/shval</a> </li>
</ul> </dd>
<dd> This work was extended by Ramy Medhat (U. of Waterloo) to perform
per-address or per-instruction floating-point error analysis. Using this
framework, he developed several mixed-precision versions of an image processing
library, demonstrating how the tool can assist with making performance, energy,
and accuracy tradeoffs in the embedded domain. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3126519">"Managing the Performance/Error Tradeoff of Floating-point Intensive Applications"</a> (EMSOFT'17/TECS2017) </li>
<li> Source: <a href="https://github.com/ramymedhat/shval">github.com/ramymedhat/shval</a> </li>
</ul> </dd>
<dt>STOKE</dt>
<dd> STOKE is a general stochastic
optimization and program synthesis tool from Stanford that has been extended
to handle floating-point computation. STOKE can trade off accuracy for
performance for floating-point computations.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2666356.2594302">"Stochastic Optimization of Floating-Point Programs with Tunable Precision"</a> (PLDI'14) </li>
<li> Source: <a href="https://github.com/StanfordPL/stoke">github.com/StanfordPL/stoke</a> </li>
</ul> </dd>
<dt>Verificarlo</dt>
<dd> Verificarlo is a tool written by <a href="https://sifflez.org">Pablo de
Oliveira Castro</a> (Université de Versailles St-Quentin-en-Yvelines), Eric
Petit, and Christophe Denis (ENS Cachan). It leverages the <a href="https://llvm.org">LLVM</a>
framework for automated Monte Carlo Arithmetic analysis. </dd>
<dd> <ul>
<li> Source: <a href="https://github.com/verificarlo/verificarlo">github.com/verificarlo/verificarlo</a> </li>
</ul> </dd>
<dt>VeriTracer</dt>
<dd> Verificarlo (see above) has recently been extended by Yohan Chatelain,
including the addition of contextual information as well as a temporal
visualization tool called VeriTracer. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/ARITH.2018.8464687">"VeriTracer: Context-enriched tracer for floating-point arithmetic analysis"</a> (ARITH'18) </li>
<li> Source: <a href="https://github.com/verificarlo/verificarlo/tree/veritracer">github.com/verificarlo/verificarlo/tree/veritracer</a> </li>
</ul> </dd>
<dt>Verrou</dt>
<dd> Verrou is a Valgrind-based tool written by François Févotte and others from
EDF in France. The tool can simulate choosing a random rounding mode for every
operation, and performs a delta debugging search to find areas of code that fail
a user-provided verification routine under this scheme. It can also use the
Verificarlo front-end (see above) for lower overhead.</dd>
<dd> <ul>
<li> Paper: <a href="https://hal.science/hal-01383417">"Verrou: Assessing Floating-Point Accuracy Without Recompiling"</a> (unpublished) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-63501-9_5">"Studying the Numerical Quality of an Industrial Computing Code: A Case Study on Code_aster"</a> (NSV'17) </li>
<li> Source: <a href="https://github.com/edf-hpc/verrou">github.com/edf-hpc/verrou</a> </li>
</ul> </dd>
</dl>
<a name="static"></a>
<h2> Static Tools </h2>
<dl class="static-tools">
<dt>Alive / LifeJacket</dt>
<dd>Alive verifies that LLVM IR optimizations don't change program
behavior, and includes support for floating-point instructions in
an extension named LifeJacket. Alive has verified 43 LLVM
optimizations and discovered eight incorrect ones.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2931021.2931024">"LifeJacket: verifying precise floating-point optimizations in LLVM"</a> </li>
<li> Source: <a href="https://github.com/4tXJ7f/alive">github.com/4tXJ7f/alive</a> </li>
</ul> </dd>
<dt>Alive-NJ</dt>
<dd>Alive-NJ verifies that LLVM IR optimizations don't change
program behavior, and includes support for floating-point
instructions. Alive-NJ has found five bugs in LLVM optimization
passes.</dd>
<dd> <ul>
<li> Source: <a href="https://github.com/rutgers-apl/alive-nj">github.com/rutgers-apl/alive-nj</a> </li>
</ul> </dd>
<dt>cohd / syhd</dt>
<dd> Laurent Thévenoux at LIP, ENS de Lyon has written tools for source-to-source
error compensation and synthesis of floating-point code. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1002/cpe.3953">"Automatic source-to-source error compensation of floating-point programs: code synthesis to optimize accuracy and time"</a> (CPE'16) </li>
<li> Paper: <a href="https://doi.org/10.1109/CSE.2015.11">"Automatic Source-to-Source Error Compensation of Floating-Point Programs"</a> (CSE'15) </li>
</ul> </dd>
<dt>Daisy</dt>
<dd> Daisy is another tool by <a href="https://people.mpi-sws.org/~eva/">Eva
Darulova</a> (the author of Rosa, see below) that combines various types
of static and dynamic analysis of floating-point code and is designed to be
modular and extensible. Eva also has a paper with Anastasiia Izycheva on
computing sound relative errors. They find that this can produce much
tighter error bounds than analyzing absolute error, and an implementation is
included in Daisy. Finally, Eva has a paper with Einar Horn and Saksham
Sharma on mixed-precision tuning in Scala and C, which is also included in
Daisy.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-030-25543-5_11">"Sound Approximation of Programs with Elementary Functions"</a> (CAV'19) </li>
<li> Paper: <a href="https://doi.org/10.1109/ICCPS.2018.00028">"Sound Mixed-Precision Optimization with Rewriting"</a> (ICCPS'18) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-89960-2_15">"Daisy - Framework for Analysis and Optimization of Numerical Programs (Tool Paper)"</a> (TACAS'18)</li>
<li> Paper: <a href="https://dl.acm.org/citation.cfm?id=3168462">"On Sound Relative Error Bounds for Floating-Point Arithmetic"</a> (FMCAD'17) </li>
<li> Source: <a href="https://github.com/malyzajko/daisy">github.com/malyzajko/daisy</a> </li>
</ul> </dd>
<dd> The authors of Daisy and Herbie have worked together to
combine their tools: </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-95582-7_21">"Combining Tools for Optimization and Analysis of Floating-Point Computations"</a> (FM2018)</li>
</ul> </dd>
<dt>dco / scorpio</dt>
<dd> A group from CERTH &amp; University of Thessaly and RWTH Aachen has written
a significance analysis system using a combination of interval arithmetic and
automatic differentiation. The software does not yet seem to be publicly
available.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2854038.2854058">"Towards Automatic Significance Analysis for Approximate Computing"</a> (CGO'16) </li>
</ul> </dd>
<dt>ESBMC</dt>
<dd> The Efficient SMT-based Bounded Model Checker (ESBMC) is an SMT-based
model checker for single- and multi-threaded C/C++ programs with support for
floating-point arithmetic. </dd>
<dd> <ul>
<li> Paper: "An Efficient Floating-Point Bit-Blasting API for Verifying C Programs" (to appear, NSV'20) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-70848-5_7">"Encoding Floating-Point Numbers Using the SMT Theory in ESBMC: An Empirical Evaluation over the SV-COMP Benchmarks"</a> (SBMF'17) </li>
<li> Website: <a href="http://esbmc.org">esbmc.org</a> </li>
</ul> </dd>
<dt>Flocq</dt>
<dd> Flocq is a floating-point formalization for Coq, written by
<a href="https://pages.saclay.inria.fr/sylvie.boldo/">Sylvie Boldo</a> and
others. It provides a library of theorems for analyzing arbitrary-precision
floating-point arithmetic. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/ARITH.2011.40">"Flocq: A Unified Library for Proving Floating-Point Algorithms in Coq"</a> (SCA'11) </li>
<li> Book: <a href="https://shop.elsevier.com/books/computer-arithmetic-and-formal-proofs/boldo/978-1-78548-112-3">"Computer Arithmetic and Formal Proofs"</a> (Elsevier) </li>
<li> Website: <a href="https://flocq.gitlabpages.inria.fr">flocq.gitlabpages.inria.fr</a> </li>
</ul> </dd>
<dt>FPhile</dt>
<dd> FPhile performs floating-point stability analysis via SMT solving and theorem proving. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/SYNASC.2013.35">"Verifying (In-)Stability in Floating-point Programs by Increasing Precision, using SMT Solving"</a> </li>
</ul> </dd>
<dt>FPSyn</dt>
<dd> FPSyn is an implementation of a floating-point predicate synthesis
technique that automatically derives an expression to calculate the
round-off error bound of the input computation. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-031-15077-7_3">"Synthesis of Rigorous Floating-Point Predicates"</a> (SPIN'22)</li>
</ul> </dd>
<dt>FPTaylor / FPTuner</dt>
<dd> A group at the University of Utah describes a way of using partial
derivatives and symbolic Taylor series expansions to bound floating-point
roundoff error; this achieves much tighter bounds than interval/affine/SMT
methods. More recently, they have extended theird work by performing a broad
comparison of many error bounding analyses. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-19249-9_33">"Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions"</a> (FM2015) </li>
<li> Paper: <a href="https://doi.org/10.1145/3230733">"Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions"</a> (TOPLAS2019) </li>
<li> Source: <a href="https://github.com/soarlab/FPTaylor">github.com/soarlab/FPTaylor</a> </li>
<li> Web interface: <a href="https://monadius.github.io/FPTaylorJS/">monadius.github.io/FPTaylorJS</a> </li>
</ul> </dd>
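<dd> To make the core idea concrete, the sketch below (an informal illustration, not FPTaylor itself) applies the first-order Taylor argument by hand to f(x) = x·(x+1): each of the two rounded operations contributes at most |f(x)|·ε to first order, so |error| ≤ 2·max|f(x)|·ε over the input range. FPTaylor replaces the naive grid sampling used here with rigorous global optimization.

```python
from fractions import Fraction

EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def f(x):
    # the computation under analysis: two rounded operations
    return x * (x + 1)

def taylor_error_bound(lo, hi, samples=1000):
    # First-order bound: fl(x * fl(x + 1)) = x*(x+1)*(1+d1)*(1+d2)
    # with |di| <= EPS, so |error| <= 2 * max|f(x)| * EPS + O(EPS^2).
    # (Grid sampling stands in for rigorous global optimization.)
    worst = max(abs(f(lo + (hi - lo) * i / samples))
                for i in range(samples + 1))
    return 2 * worst * EPS

# The bound dominates the actual rounding error, measured exactly
# with rational arithmetic:
bound = taylor_error_bound(0.0, 100.0)
x = 3.14159
exact = Fraction(x) * (Fraction(x) + 1)
err = float(abs(Fraction(f(x)) - exact))
assert err <= bound
```
</dd>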
<dd> They have also built a mixed-precision tool for Python code based on this
technique. Note that it also requires the commercial
<a href="https://www.gurobi.com/solutions/gurobi-optimizer/">Gurobi Optimizer</a>. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3009837.3009846">"Rigorous Floating-Point Mixed-Precision Tuning"</a> (POPL'17) </li>
<li> Source: <a href="https://github.com/soarlab/FPTuner">github.com/soarlab/FPTuner</a> </li>
</ul> </dd>
<dt>Gappa</dt>
<dd>Gappa is a tool for automated reasoning about floating-point and
fixed-point arithmetic. It has been used to prove the correctness of the
CRlibm implementations of elementary functions.</dd>
<dd> <ul>
<li> Website: <a href="https://gappa.gitlabpages.inria.fr">gappa.gitlabpages.inria.fr</a> </li>
</ul> </dd>
<dt>moTuner</dt>
<dd> moTuner is a framework built on LLVM for efficiently tuning
mixed-precision operators during compilation. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3528416.3530231">"moTuner: a compiler-based auto-tuning approach for mixed-precision operators"</a> (CF'22) </li>
<li> Source: <a href="https://github.com/MoZeWei/moTuner">github.com/MoZeWei/moTuner</a> </li>
</ul> </dd>
<dt>POP</dt>
<dd>POP (Precision OPtimizer) is a tool written by Dorra Ben Khalifa
(Laboratory of Mathematics and Physics, University of Perpignan Via Domitia)
that uses abstract interpretation to determine the minimal precision
required on inputs. The primary current contribution is the application of
these techniques to the Internet of Things (IoT) domain.</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/IINTEC48298.2019.9112133">"Precision Tuning and Internet of Things"</a> (IINTEC'19) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-030-46902-3_5">"POP: A Tuning Assistant for Mixed-Precision Floating-Point Computations"</a> (FTSCS'19) </li>
</ul> </dd>
<dt>PRECiSA / FPRoCK</dt>
<dd> PRECiSA (Program Round-off Error Certifier via Static Analysis) is a
tool from the NIA and NASA that calculates symbolic error
bounds using the <a href="https://pvs.csl.sri.com">PVS</a> language. The authors
also provide a web interface. More recently, the FPRoCK tool was written in
collaboration with authors at the University of Utah to solve mixed real and
floating-point formulas, improving the accuracy of PRECiSA. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-66266-4_14">"Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis"</a> (SAFECOMP2017) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-73721-8_24">"An Abstract Interpretation Framework for the Round-Off Error Analysis of Floating-Point Programs"</a> (VMCAI 2018, preferred citation) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-030-20652-9_25">"A Mixed Real and Floating-Point Solver"</a> (NFM 2019) </li>
<li> Source: <a href="https://github.com/nasa/PRECiSA">github.com/nasa/PRECiSA</a> </li>
</ul> </dd>
<dt>Real2Float</dt>
<dd> Victor Magron and others present a framework for providing upper bounds on
absolute roundoff errors using semidefinite programming and sums of squares
certificates. The results can be checked using the Coq theorem prover. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3015465">"Certified Roundoff Error Bounds Using Semidefinite Programming"</a> (TOMS'17) </li>
<li> Source: <a href="https://github.com/afd/real2float">github.com/afd/real2float</a> </li>
</ul> </dd>
<dt>Rosa</dt>
<dd> Rosa is a source-to-source translator written by <a
href="https://people.mpi-sws.org/~eva/">Eva Darulova</a> (now at MPI-SWS,
previously EPFL) for the Scala language. It uses an SMT solver to annotate a
program with mixed-precision types. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3014426">"Towards a Compiler for Reals"</a> (TOPLAS'17) </li>
<li> Source: <a href="https://github.com/malyzajko/rosa">github.com/malyzajko/rosa</a> </li>
</ul> </dd>
<dt>Salsa</dt>
<dd> Salsa is an abstract-interpretation-based floating-point analysis tool that
suggests transformations to minimize floating-point error, and is the dissertation
work of Nasrine Damouche. The source code is not available.
</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.29007/j2fd">"Salsa: An Automatic Tool to Improve the Numerical Accuracy of Programs"</a> (AFM'17) </li>
<li> Paper: <a href="https://doi.org/10.1007/s10009-016-0435-0">"Improving the numerical accuracy of programs by automatic transformation"</a> (STTT'17, preferred citation) </li>
<li> Thesis: <a href="https://theses.hal.science/tel-01455727/">"Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods"</a> (2016)</li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-19458-5_3">"Intra-procedural Optimization of the Numerical Accuracy of Programs"</a> (FMICS'15) </li>
</ul> </dd>
<dt>Satire</dt>
<dd> Satire is a sound rounding error estimator that uses several
techniques, such as path strength reduction and bound optimization, to
improve the scalability of static analysis. The analysis scales to hundreds
of thousands of operations. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/SC41405.2020.00055">"Scalable yet Rigorous Floating-Point Error Analysis"</a> (SC'20) </li>
<li> Github: <a href="https://github.com/arnabd88/Satire">arnabd88/Satire</a> </li>
</ul> </dd>
<dt>TAFFO</dt>
<dd> TAFFO is an LLVM-based conversion and autotuning framework that tries
to replace floating-point operations with fixed-point operations to the
extent possible. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/LES.2019.2913774">"TAFFO: Tuning Assistant for Floating to Fixed Point Optimization"</a> (LES'19) </li>
<li> Website: <a href="https://github.com/HEAPLab/TAFFO">github.com/HEAPLab/TAFFO</a> </li>
</ul> </dd>
<dt>VP Float</dt>
<dd> VP Float is an extension of the C type system to represent
variable-precision floating-point arithmetic. The extension is built using
LLVM and has two backend code generators, one for a RISC-V coprocessor and one
for the GNU MPFR multi-precision library. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3410463.3414660">"VP Float: First Class Treatment for Variable Precision Floating Point Arithmetic"</a> (PACT'20, poster) </li>
</ul> </dd>
</dl>
<a name="commercial"></a>
<h2> Commercial Tools </h2>
<dl class="commercial-tools">
<dt>Astrée</dt>
<dd> Astrée is a commercial static analysis tool for proving the correctness
of C code, including all floating-point computation. It has been used to
prove the absence of runtime exceptions in flight control software for
Airbus planes. </dd>
<dd> <ul>
<li> Website: <a href="https://www.absint.com/astree/index.htm">absint.com/astree/index.htm</a> </li>
</ul> </dd>
<dt>Fluctuat</dt>
<dd> Fluctuat is a commercial, abstract-interpretation-based static analysis tool
for analyzing numerical code in C or Ada and characterizing the errors
caused by the approximation inherent in floating-point arithmetic. Fluctuat
has been used to verify safety-critical embedded systems. It was originally
written by <a
href="https://www.lix.polytechnique.fr/~goubault/">Eric Goubault</a>, <a
href="https://perso.univ-perp.fr/mmartel/">Matthieu Martel</a>, and <a
href="https://www.lix.polytechnique.fr/Labo/Sylvie.Putot/">Sylvie Putot</a>.
</dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/3-540-45927-8_14">"Propagation of Roundoff Errors in Finite Precision Computations: A Semantics Approach"</a> (ESOP'02) </li>
<li> Paper: <a href="https://doi.org/10.1007/3-540-45927-8_15">"Asserting the Precision of Floating-Point Computations: A Simple Abstract Interpreter"</a> (ESOP'02) </li>
<li> Paper: <a href="https://doi.org/10.1007/978-3-642-38856-9_1">"Static Analysis by Abstract Interpretation of Numerical Programs and Systems, and FLUCTUAT"</a> (SAS'13) </li>
<li> Website: <a href="https://www.lix.polytechnique.fr/Labo/Sylvie.Putot/fluctuat.html">lix.polytechnique.fr/Labo/Sylvie.Putot/fluctuat.html</a> </li>
</ul> </dd>
<dt>Numalis Software Suite</dt>
<dd> Numalis Software Suite is a collection of several commercial tools for
static and dynamic analysis of floating-point code with the goal of
generating more accurate codes and making mixed-precision decisions,
targeted primarily at AI and neural network domains. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/ISSREW.2017.40">"An Overview of Numalis Software Suite for Reliable Numerical Computation"</a> (ISSREW'17) </li>
<li> Website: <a href="https://numalis.com">numalis.com</a> </li>
</ul> </dd>
</dl>
<a name="misc"></a>
<h2> Miscellaneous </h2>
<dl class="misc-tools">
<dt> CompCert </dt>
<dd> CompCert is a compiler that has been formally verified to preserve
floating-point semantics using Coq. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/s10817-014-9317-x">"Verified Compilation of Floating-Point Computations"</a> (JAR 2015) </li>
<li> Website: <a href="https://compcert.org">compcert.org</a> </li>
</ul> </dd>
<dt> Fbench / Ffbench </dt>
<dd> Fbench and Ffbench are two floating-point performance benchmarks by
<a href="https://www.fourmilab.ch">John Walker</a> (Fourmilab). </dd>
<dd> <ul>
<li> Website: <a href="https://www.fourmilab.ch/fbench/">fourmilab.ch/fbench</a> </li>
</ul> </dd>
<dt>FLiT: Floating-point Litmus Tests</dt>
<dd> FLiT is a test infrastructure for detecting variability in floating-point
code due to differences in compilers and execution environments. It is currently
developed and maintained by Michael Bentley at the University of Utah. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1109/IISWC.2017.8167780">"FLiT: Cross-Platform Floating-Point Result-Consistency Tester and Workload"</a> (IISWC'17) </li>
<li> Website: <a href="https://pruners.github.io/flit/">pruners.github.io/flit</a> </li>
<li> Github: <a href="https://github.com/PRUNERS/FLiT">PRUNERS/FLiT</a> </li>
</ul> </dd>
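<dd> The variability FLiT hunts for stems from the non-associativity of floating-point addition: reassociating a reduction (as a vectorizing or <code>-ffast-math</code> compilation may) changes the rounding, and therefore the result. A self-contained illustration (not part of FLiT itself):

```python
import math

# Reassociation changes the result: 1.0 is smaller than the ulp of
# 1e16, so it vanishes when added to -1e16 first.
a, b, c = 1e16, -1e16, 1.0
assert (a + b) + c == 1.0
assert a + (b + c) == 0.0

# Different summation algorithms over the same data also disagree:
data = [0.1] * 10
assert sum(data) != math.fsum(data)  # 0.9999999999999999 vs 1.0
```
</dd>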
<dt>FPBench</dt>
<dd> FPBench is a community benchmark project for floating-point examples. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1007/978-3-319-54292-8_6">"Toward a Standard Benchmark Format and Suite for Floating-Point Analysis"</a> (NSV'16) </li>
<li> Website: <a href="https://fpbench.org">fpbench.org</a> </li>
<li> Source: <a href="https://github.com/FPBench/FPBench">github.com/FPBench/FPBench</a> </li>
</ul> </dd>
<dt>FPHPC Benchmark Suite</dt>
<dd> This is a collection of programs containing floating-point arithmetic
with the immediate goal of aiding the development and evaluation of
floating-point precision and correctness analysis tools. The long-term goal
is to support and enable the scaling of such tools to provide insights for
high-performance computing (HPC) applications. This collection aims to
address the "middle ground" between real-valued expressions (such as those
contained in the FPBench collection) and full HPC applications. </dd>
<dd> <ul>
<li> Source: <a href="https://github.com/crafthpc/fphpc">github.com/crafthpc/fphpc</a> </li>
</ul> </dd>
<dt>FPVM</dt>
<dd> FPVM is a floating-point virtual machine proposed and prototyped by
Peter Dinda and others at Northwestern University. The goal is to allow an
existing application binary to be extended to support alternative
arithmetic systems. The current implementation combines static binary
analysis with trap-and-emulate execution. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/3502181.3531469">"FPVM: Towards a Floating Point Virtual Machine"</a> (HPDC'22)</li>
</ul> </dd>
<dt>KLEE</dt>
<dd> KLEE is a symbolic execution virtual machine built on LLVM with a
benchmark suite of example floating-point programs. There is also a
<a href="https://github.com/srg-imperial/klee-float">fork</a>
that supports reasoning over floating-point constraints. </dd>
<dd> <ul>
<li> Paper: <a href="https://dl.acm.org/citation.cfm?id=1855756">"KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs"</a> (OSDI'08) </li>
<li> Website: <a href="https://klee-se.org">klee-se.org</a> </li>
<li> Source: <a href="https://github.com/ApproxSymate/klee">github.com/ApproxSymate/klee</a> (<a href="https://github.com/ApproxSymate/fp-examples">examples</a>) </li>
</ul> </dd>
<dt>Posit libraries</dt>
<dd> Posits are the newest iteration on
<a href="http://johngustafson.net">John Gustafson's</a> Unum representation
(see below). Unlike Unums, Posits have a fixed overall width, but the number
of exponent and significand bits varies according to a run-length-encoded
"regime" field. The format provides higher accuracy and a greater dynamic
range than identically-sized IEEE formats. </dd>
<dd> <ul>
<li> Website: <a href="https://posithub.org">posithub.org</a> (Centre for Next-Generation Arithmetic and SoftPosit) </li>
<li> Paper: <a href="http://johngustafson.net/pdfs/BeatingFloatingPoint.pdf">"Beating Floating Point at its Own Game: Posit Arithmetic"</a> </li>
<li> Table Generator: <a href="https://www.posithub.org/widget/lookup">posithub.org/widget/lookup</a> </li>
<li> Source (Julia/C/C++ library): <a href="https://github.com/Etaphase/FastSigmoids.jl">github.com/Etaphase/FastSigmoids.jl</a>
(courtesy of <a href="https://github.com/ityonemo">Isaac Yonemoto</a>, <a href="https://github.com/interplanetary-robot">IREBC</a>) </li>
<li> Source (C/C++ library): <a href="https://github.com/libcg/bfp">github.com/libcg/bfp</a>
(courtesy of <a href="https://github.com/libcg">Clément Guérin</a>)</li>
<li> Source (C++ template library): <a href="https://github.com/stillwater-sc/universal">github.com/stillwater-sc/universal</a> </li>
</ul> </dd>
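<dd> As an illustration of the layout, the following decoder (an informal sketch, not one of the libraries above) interprets an n-bit posit with es exponent bits: a sign bit, then a run-length-encoded regime, then up to es exponent bits, then the fraction with an implicit leading 1:

```python
def decode_posit(bits, n, es):
    # Decode an n-bit posit with es exponent bits into a float.
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float('nan')            # NaR ("not a real")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask          # negatives use two's complement
    rest = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]
    run = 1                            # regime: run of identical bits
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    k = run - 1 if rest[0] == 1 else -run
    tail = rest[run + 1:]              # skip the regime terminator
    e = 0
    for b in tail[:es]:
        e = (e << 1) | b
    e <<= max(0, es - len(tail[:es]))  # truncated exponent bits are 0
    frac = 1.0                         # implicit leading 1
    for i, b in enumerate(tail[es:]):
        frac += b * 2.0 ** -(i + 1)
    return sign * 2.0 ** (k * (1 << es) + e) * frac

# 8-bit posits with es = 0:
assert decode_posit(0b01000000, 8, 0) == 1.0
assert decode_posit(0b01010000, 8, 0) == 1.5
assert decode_posit(0b11000000, 8, 0) == -1.0
assert decode_posit(0b01111111, 8, 0) == 64.0   # maxpos
```
</dd>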
<dt>S3FP</dt>
<dd> Another project from the University of Utah (see FPTaylor and FPTuner
above), S3FP is a tool for finding floating-point inputs that will maximize
error. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/2555243.2555265">"Efficient Search for Inputs Causing High Floating-point Errors"</a> (PPoPP'14) </li>
<li> Source: <a href="https://github.com/soarlab/S3FP">github.com/soarlab/S3FP</a> </li>
</ul> </dd>
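<dd> The flavor of this black-box search can be sketched in a few lines (a toy, not S3FP's actual algorithm): sample inputs, measure the error of the float computation against an exact rational reference, and keep the worst offender. Cancellation-prone inputs (x close to y below) surface large relative errors quickly:

```python
import random
from fractions import Fraction

def naive(x, y):
    return x * x - y * y               # cancels badly when x is near y

def rel_error(x, y):
    # exact rational reference for the same expression
    exact = Fraction(x) ** 2 - Fraction(y) ** 2
    if exact == 0:
        return 0.0
    return float(abs((Fraction(naive(x, y)) - exact) / exact))

def search(trials=5000, seed=0):
    rng = random.Random(seed)
    best_inputs, best_err = None, 0.0
    for _ in range(trials):
        x = rng.uniform(1.0, 2.0)
        y = x + rng.uniform(-1e-8, 1e-8)   # bias toward cancellation
        e = rel_error(x, y)
        if e > best_err:
            best_err, best_inputs = e, (x, y)
    return best_inputs, best_err

inputs, worst = search()
assert worst > 1e-10   # far above the 2**-53 per-operation error
```
</dd>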
<dt>SOAP</dt>
<dd> SOAP (Structural Optimisation of Arithmetic Programs) is a tool for
optimizing FPGA-based implementations of floating-point code. </dd>
<dd> <ul>
<li> Source: <a href="https://github.com/admk/soap">github.com/admk/soap</a> </li>
</ul> </dd>
<dt>Sollya</dt>
<dd>Sollya is a library and environment for safe floating-point
development, specifically intended to aid the development of math
library functions.</dd>
<dd> <ul>
<li>Website: <a href="https://www.sollya.org">sollya.org</a></li>
</ul> </dd>
<dt>Unum library</dt>
<dd> Unums ("universal numbers") are a novel binary representation for real
numbers by <a href="http://johngustafson.net">John Gustafson</a> that exactly
quantifies the uncertainty of a stored value. It extends IEEE floating-point by
making both the exponent and significand fields variable-sized, and it adds an
"exact" bit that is set if the number is exact and unset if the number
represents a range between representable numbers. It provides a way to obtain
mathematically-rigorous results in numeric code at the cost of computational
efficiency. </dd>
<dd> <ul>
<li> Website: <a href="http://www.johngustafson.net/unums.html">johngustafson.net/unums.html</a> </li>
<li> Source (C/C++ library): <a href="https://github.com/LLNL/unum">github.com/LLNL/unum</a>
(courtesy of <a href="https://people.llnl.gov/lloyd23">Scott Lloyd</a>, <a href="https://www.llnl.gov">LLNL</a>)</li>
</ul> </dd>
<dt>VFLOAT</dt>
<dd> VFLOAT is a variable-precision floating-point library for FPGAs, developed
and maintained by <a href="https://coe.northeastern.edu/Research/rcl/members/MEL/index.html">Miriam Leeser</a>
at Northeastern University. </dd>
<dd> <ul>
<li> Paper: <a href="https://doi.org/10.1145/1839480.1839486">"VFloat: A Variable Precision Fixed- and Floating-Point Library for Reconfigurable Hardware"</a> (TRETS'10) </li>
<li> Paper: <a href="https://doi.org/10.1145/2851507">"Open-Source Variable-Precision Floating-Point Library for Major Commercial FPGAs"</a> (TRETS'16) </li>
<li> Website: <a href="https://coe.northeastern.edu/Research/rcl/projects/floatingpoint/index.html">Link</a> </li>
</ul> </dd>
<!-- TODO: include this?
<dt><a href="https://bitbucket.org/icl/bonsai">BONSAI</a></dt>
<dd>BONSAI tests and tunes computational kernels.</dd>
-->
</dl>
<p>This list is primarily maintained for the research community by
<a href="https://w3.cs.jmu.edu/lam2mo/">Mike Lam</a>.
If you are developing or are maintaining a floating-point tool, or know of a
tool that isn't on the list, please
<a href="mailto:fpbench@cs.washington.edu">email the list</a>.</p>
</body>
</html>