<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<!-- Meta tags for social media banners, these should be filled in appropriately as they are your "business card" -->
<!-- Replace the content tag with appropriate information -->
<meta name="description" content="MagicDec-part2: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding">
<meta property="og:title" content="Magicdec"/>
<meta property="og:description" content="MagicDec-part2: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding"/>
<meta property="og:url" content="https://github.com/Infini-AI-Lab/MagicDec/"/>
<!-- Path to banner image, should be in the path listed below. Optimal dimensions are 1200x630-->
<meta property="og:image" content="static/images/icons/MagicDec.png"/>
<meta property="og:image:width" content="1200"/>
<meta property="og:image:height" content="630"/>
<meta name="twitter:title" content="MagicDec">
<meta name="twitter:description" content="MagicDec-part2: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding">
<!-- Path to banner image, should be in the path listed below. Optimal dimensions are 1200x600-->
<meta name="twitter:image" content="static/images/icons/MagicDec.png">
<meta name="twitter:card" content="summary_large_image">
<!-- Keywords for your paper to be indexed by-->
<meta name="keywords" content="Speculative Decoding">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>MagicDec-part2: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding</title>
<link rel="icon" type="image/x-icon" href="static/images/icons/MagicDec.png">
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="static/css/bulma.min.css">
<link rel="stylesheet" href="static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="static/css/bulma-slider.min.css">
<link rel="stylesheet" href="static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="static/css/index.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://documentcloud.adobe.com/view-sdk/main.js"></script>
<script defer src="static/js/fontawesome.all.min.js"></script>
<script src="static/js/bulma-carousel.min.js"></script>
<script src="static/js/bulma-slider.min.js"></script>
<script src="static/js/index.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script type="text/javascript"
src="http://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
<style>
@font-face {
font-family: 'TriForceFont';
src: url('static/Triforce.ttf') format('truetype');
}
.custom-font {
font-family: 'TriForceFont', sans-serif !important;
font-size: 3.0rem;
}
body {
background-color: #f5f5f5; /* Adjust this to match your page's gray color */
}
.image-container {
background-color: #f5f5f5; /* Same as body background */
display: inline-block; /* Or 'block' depending on your layout needs */
}
.image-container img {
mix-blend-mode: multiply;
max-width: 100%;
height: auto;
}
.container.is-fluid {
margin-left: 15px;
margin-right: 15px;
max-width: none;
}
.hero .hero-body {
padding: 3rem 0;
}
.section {
padding: 3rem 0;
}
.column.is-full-width {
padding: 0 15px;
}
</style>
</head>
<body>
<!-- Section: Header Titlepage -->
<section class="hero">
<div class="hero-body">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths has-text-centered">
<img src="static/images/icons/MagicDec.png" alt="Magic Wand Icon" style="display: inline; height: 3rem; vertical-align: top;">
<h1 class="title is-2 publication-title" style="display: inline;">MagicDec-part2: Breaking the Latency-Throughput Tradeoff for Long Contexts with Speculative Decoding</h1>
<br><br>
<div class="is-size-5 publication-authors">
<span class="author-block"><a href="" target="_blank">Jian Chen</a><sup>*1</sup>,</span>
<span class="author-block"><a href="" target="_blank">Vashisth Tiwari</a><sup>*1</sup>,</span>
<span class="author-block"><a href="" target="_blank">Ranajoy Sadhukhan</a><sup>*1</sup>,</span>
<span class="author-block"><a href="https://dreaming-panda.github.io/" target="_blank">Zhuoming Chen</a><sup>1</sup>,</span>
<br>
<span class="author-block"><a href="" target="_blank">Jinyuan Shi</a><sup>2</sup></span>
<span class="author-block"><a href="" target="_blank">Ian En-Hsu Yen</a><sup>2</sup>,</span>
<span class="author-block"><a href="https://www.andrew.cmu.edu/user/beidic/" target="_blank">Beidi Chen</a><sup>1,3</sup></span>
</div>
<div class="is-size-5 publication-authors">
<span class="affliation">
<small>
<sup>1</sup>Carnegie Mellon University
<sup>2</sup>Moffett AI
<sup>3</sup>Meta AI (FAIR)
</small>
</span>
<span class="eql-cntrb">
<small><br><sup>*</sup>Indicates Equal Contribution</small>
</span>
</div>
<div class="column has-text-centered">
<span class="link-block">
<a href="https://arxiv.org/abs/2408.11049" target="_blank" class="external-link button is-normal is-rounded is-dark">
<span class="icon"><i class="ai ai-arxiv"></i></span>
<span>arXiv</span>
</a>
</span>
<span class="link-block">
<a href="https://github.com/Infini-AI-Lab/MagicDec/tree/main" target="_blank" class="external-link button is-normal is-rounded is-dark">
<span class="icon"><i class="fab fa-github"></i></span>
<span>Code</span>
</a>
</span>
<!-- <span class="link-block">
<a href="https://youtu.be/vRAaAyjr6Jo" target="_blank" class="external-link button is-normal is-rounded is-dark">
<span class="icon"><i class="fab fa-youtube"></i></span>
<span>Video</span>
</a>
</span> -->
</div>
</div>
</div>
</div>
</div>
</section>
<!-- Section: Paper abstract -->
<section class="section hero is-light">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified">
<p>
<strong style="font-weight: 900;color: #0f598a">TL;DR:</strong> We introduce <strong>MagicDec</strong>, which uses Speculative Decoding to improve both throughput and latency of LLM inference. Our work identifies the bottleneck shifts with increasing batch size and sequence length, and uses these insights to deploy Speculative Decoding more effectively for high throughput inference. It challenges the existing wisdom regarding the inefficacy of Speculative Decoding for large batch sizes. More interestingly, we observe an <strong>improvement in speedup with increasing batch size</strong> for moderate to long sequences. Our work theoretically motivates and empirically validates why Speculative Decoding is a potential solution for breaking throughput-latency tradeoff, when used wisely.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section: Paper abstract -->
<section class="section hero is-light">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="text-align: center;">
<img src="static/images/icons/Llama.png" style="height: 43px; display: inline; vertical-align:text-top;"/>
Introduction
</h2>
<div class="content has-text-justified">
<p>
With the emergence of long-context models and the growing popularity of applications like interactive chatbots, efficiently serving long-context requests has gained significant attention. Concurrently, for commercial use cases, high-throughput serving directly translates to increased revenue. However, achieving both low latency and high throughput poses a challenge because the two are often at odds with each other. While Speculative Decoding techniques have been proposed to improve latency at the cost of throughput, various continuous batching techniques have prioritized throughput over latency. Quantization- and pruning-based methods can achieve both high throughput and low latency, but often sacrifice performance. Given these trade-offs, we pose the following question:
<blockquote>
<strong>Can we achieve high throughput and low latency for long-context LLM generation without sacrificing performance?</strong>
</blockquote>
</p>
<p>
To answer this question, we revisit the efficacy of Speculative Decoding through an analytical approach. Existing works (e.g., <a style="color: #209CEE" href="https://arxiv.org/abs/2406.14066">Liu et al., 2024</a>; <a style="color: #209CEE" href="https://arxiv.org/abs/2310.18813">Su et al., 2023</a>) have claimed that although useful for small batch sizes, Speculative Decoding can become counter-productive for serving large batches. This inefficiency stems from two primary factors:
<ol>
<li>For small batch sizes, Speculative Decoding tries to use the underutilized compute resources through parallel verification of speculated tokens, but for large batch sizes, the compute resources are already well-utilized, diminishing the potential benefits.</li>
<li>If the draft model is not capable enough, the low acceptance rate can cause the target model to spend more time verifying tokens that are ultimately rejected.</li>
</ol>
</p>
<h3 class="title is-size-5" style="text-align: left;">
<img src="static/images/icons/revisit.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Revisiting Speculative Decoding for High Throughput
</h3>
<p>
In this blog post, we examine these claims more carefully and look for workarounds. With suitable drafting strategies,
<ol>
<li><strong>We are able to find speedup in the high throughput regime 😎</strong></li>
<li><strong>Furthermore, we can achieve increased speedup with increasing batch size, meaning higher throughput at a higher latency budget 🤩</strong></li>
</ol>
beyond a critical sequence length that depends on the model type and hardware specifications. The longer the context, the greater the improvement in throughput. </p>
<div style="display: flex; flex-wrap: wrap; justify-content: center;">
<div style="width: 45%; min-width: 150px; margin: 5px;">
<canvas id="chart1"></canvas>
</div>
<div style="width: 45%; min-width: 150px; margin: 5px;">
<canvas id="chart2"></canvas>
</div>
</div>
<script src="static/js/plots/throughput_latency_smaller.js"></script>
<br>
<p>
<!-- We note that our sparse KV cache based drafting is absolutely key in high batch size and sequence length regime. To retain a high acceptance rate even for longer contexts, we rely on a simple but effective KV sparsification strategy called <a style="color: #209CEE" href="https://arxiv.org/abs/2309.17453">StreamingLLM</a>. -->
The key observation is that, for every model and hardware pair, there exists a <i>critical sequence length</i> beyond which LLM inference becomes memory bound even for large batch sizes. This is because, for long enough sequences, the KV cache becomes the primary memory bottleneck and, unlike the model parameters, it scales with batch size. Although computation time also increases linearly with batch size, the increase in KV loading time dominates. We use this insight to:
<ol>
<li><strong>Design a draft model with a <a style="color: #209CEE">sparse KV cache</a>.</strong></li>
<li><strong>Exploit the low verification to autoregressive decoding latency ratio even at large batch sizes.</strong></li>
</ol>
</p>
<p>
KV sparsification for drafting is absolutely important because otherwise, in a high batch size and sequence length regime, the draft to target cost ratio will be decided by the draft to target memory footprint ratio only. For example, when we speculate Llama-3.1-70B model with a Llama-3.1-8B draft, the draft to target memory loading ratio goes up to 0.4 and stays constant at larger batch sizes, as can be seen in <a style="color: #0b3c5d" href="#figure-4">Figure 4</a>. However, with a fixed KV budget for the draft model, draft latency increases very slowly with batch size. Consequently, the draft to target cost ratio keeps on decreasing for larger batches, resulting in a higher speed-up.
</p>
<p>
The fixed draft KV budget naturally raises concern about the acceptance rate, which is another important deciding factor. Fortunately, StreamingLLM with a small KV budget like 512 is able to retain a high acceptance rate even for sequences as long as 100K! To further improve the acceptance rate, we can increase the drafting budget in different ways as long as the draft to target cost ratio remains reasonably small. Contrary to existing beliefs (e.g., <a style="color: #209CEE" href="https://arxiv.org/abs/2406.14066">Liu et al., 2024</a>; <a style="color: #209CEE" href="https://arxiv.org/abs/2310.18813">Su et al., 2023</a>), this high acceptance rate allows us to use a longer speculation length for larger batches. And thus, as you guessed, we can keep getting better speedup with increasing batch size 💪.
<br>
<br>
Seems like Magic? Well, let's get into the details of MagicDec! 🧙‍♂️
<!-- <a style="color: #209CEE" href="https://arxiv.org/abs/2404.11912" target="_blank">paper</a>. -->
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section: Motivation -->
<section class="section hero is-light">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="text-align: center;">
<img src="static/images/icons/Idea.png" style="height: 50px; display: inline; vertical-align: middle;"/>
Motivation
</h2>
<div class="content has-text-justified">
<!-- <p>
Speculative decoding leverages underutilized GPU compute during memory-bound autoregressive decoding. However, <a href="https://arxiv.org/pdf/2406.14066" rel="external nofollow noopener" target="_blank" style="color: #209CEE;">prior research</a> has shown that Speculative Decoding becomes less effective as batch sizes increase and exhaust the available compute. These challenges have led to Speculative Decoding being avoided in processing large batches. However, we show that for long sequences, a shift in LLM operation bottleneck allows Speculative Decoding to become more effective with larger batch sizes.
</p> -->
<h4 class="title is-5">
<img src="static/images/icons/MathematicsCompass.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Analytical Breakdown of Speedup in Speculative Decoding
</h4>
<p>Given a speculation length γ for a sequence of length S and batch size B, let T<sub>T</sub>(B, S, 1) and T<sub>D</sub>(B, S, 1) denote the target and draft decoding latencies, respectively. The verification time, T<sub>V</sub>(B, S, γ), is the time it takes the target model to verify the γ speculated tokens in a single forward pass. Given an acceptance rate α ∈ [0, 1] and speculation length γ, Ω(γ, α) represents the expected number of tokens generated in one verification step, as described by <a style="color: #209CEE" href="https://arxiv.org/abs/2211.17192">Leviathan et al.</a>:</p>
<div style="text-align: center;">
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<mrow>
<mi>Ω</mi>
<mo stretchy="false">(</mo>
<mi>γ</mi>
<mo>,</mo>
<mi>α</mi>
<mo stretchy="false">)</mo>
<mo>:=</mo>
<mi>𝔼</mi>
<mo>[</mo>
<mo>#</mo>
<mtext> generated tokens</mtext>
<mo>]</mo>
<mo>=</mo>
<mfrac>
<mrow>
<mn>1</mn>
<mo>-</mo>
<msup>
<mi>α</mi>
<mrow>
<mi>γ</mi>
<mo>+</mo>
<mn>1</mn>
</mrow>
</msup>
</mrow>
<mrow>
<mn>1</mn>
<mo>-</mo>
<mi>α</mi>
</mrow>
</mfrac>
</mrow>
</math>
</div>
<p>The total time taken for Speculative Decoding, T<sup>SD</sup><sub>Total</sub>, is given by:</p>
<div style="text-align: center;">
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<mrow>
<msubsup>
<mi>T</mi>
<mtext>Total</mtext>
<mtext>SD</mtext>
</msubsup>
<mo>=</mo>
<mi>γ</mi>
<mo>⋅</mo>
<msub>
<mi>T</mi>
<mi>D</mi>
</msub>
<mo stretchy="false">(</mo>
<mi>B</mi>
<mo>,</mo>
<mi>S</mi>
<mo>,</mo>
<mn>1</mn>
<mo stretchy="false">)</mo>
<mo>+</mo>
<msub>
<mi>T</mi>
<mi>V</mi>
</msub>
<mo stretchy="false">(</mo>
<mi>B</mi>
<mo>,</mo>
<mi>S</mi>
<mo>,</mo>
<mi>γ</mi>
<mo stretchy="false">)</mo>
</mrow>
</math>
</div>
<p>The expected per-token latency for Speculative Decoding is simply:</p>
<div style="text-align: center;">
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<mrow>
<msubsup>
<mi>T</mi>
<mtext>Avg</mtext>
<mtext>SD</mtext>
</msubsup>
<mo>=</mo>
<mfrac>
<msubsup>
<mi>T</mi>
<mtext>Total</mtext>
<mtext>SD</mtext>
</msubsup>
<mrow>
<mi>Ω</mi>
<mo>(</mo>
<mi>γ</mi>
<mo>,</mo>
<mi>α</mi>
<mo>)</mo>
</mrow>
</mfrac>
</mrow>
</math>
</div>
<p>The ratio of Speculative Decoding latency to regular autoregressive decoding latency can be broken down as:</p>
<div id="eq-1" style="text-align: center; position: relative;">
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<mrow>
<mfrac>
<mrow>
<msubsup>
<mi>T</mi>
<mtext>Avg</mtext>
<mtext>SD</mtext>
</msubsup>
</mrow>
<mrow>
<msub>
<mi>T</mi>
<mi>T</mi>
</msub>
</mrow>
</mfrac>
<mo>=</mo>
<mfrac>
<mn>1</mn>
<mrow>
<mi>Ω</mi>
<mo stretchy="true">(</mo>
<mi>γ</mi>
<mo>,</mo>
<mi>α</mi>
<mo stretchy="true">)</mo>
</mrow>
</mfrac>
<mo stretchy="true">(</mo>
<mfrac>
<mrow>
<mi>γ</mi>
<mo>⋅</mo>
<msub>
<mi>T</mi>
<mi>D</mi>
</msub>
</mrow>
<msub>
<mi>T</mi>
<mi>T</mi>
</msub>
</mfrac>
<mo>+</mo>
<mfrac>
<mrow>
<msub>
<mi>T</mi>
<mi>V</mi>
</msub>
<mo stretchy="true">(</mo>
<mi>γ</mi>
<mo stretchy="true">)</mo>
</mrow>
<msub>
<mi>T</mi>
<mi>T</mi>
</msub>
</mfrac>
<mo stretchy="true">)</mo>
</mrow>
</math>
<div style="position: absolute; right: 0; bottom: 0;">
(1)
</div>
</div>
<p>We shall come back to this equation shortly.
</p>
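<p>
To make <a style="color: #0b3c5d" href="#eq-1">Eq. 1</a> concrete, here is a minimal Python sketch (our own illustration, not the MagicDec codebase) that evaluates Ω(γ, α) and the resulting latency ratio for assumed values of the draft-to-target cost ratio T<sub>D</sub>/T<sub>T</sub> and the verification-to-target cost ratio T<sub>V</sub>/T<sub>T</sub>:
</p>
<pre><code>def expected_generated_tokens(gamma, alpha):
    # Omega(gamma, alpha) = (1 - alpha**(gamma + 1)) / (1 - alpha)
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)

def sd_to_autoregressive_latency_ratio(gamma, alpha, draft_ratio, verify_ratio):
    # draft_ratio  = T_D / T_T : one draft decode step vs. one target decode step
    # verify_ratio = T_V / T_T : verifying gamma tokens vs. one target decode step
    omega = expected_generated_tokens(gamma, alpha)
    return (gamma * draft_ratio + verify_ratio) / omega

# Assumed, illustrative values: alpha = 0.8, T_D/T_T = 0.1, T_V/T_T = 1.1
for gamma in (2, 3, 4, 5):
    ratio = sd_to_autoregressive_latency_ratio(gamma, 0.8, 0.1, 1.1)
    print(f"gamma={gamma}: latency ratio={ratio:.2f}, speedup={1.0 / ratio:.2f}x")
</code></pre>
<p>
Under these assumed ratios, the per-token latency of Speculative Decoding stays well below that of autoregressive decoding and keeps improving with γ. The rest of this section explains why, in the long-context and large-batch regime, T<sub>V</sub>/T<sub>T</sub> stays close to 1 and T<sub>D</sub>/T<sub>T</sub> keeps shrinking.
</p>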
<h4 class="title is-5">
<img src="static/images/icons/Switch.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Shift in Bottleneck from Compute to Memory
</h4>
<p>
It is well understood that Speculative Decoding is beneficial when LLM inference is memory-bound. The following two observations are key to understanding why Speculative Decoding is useful even for large batches:
<ol>
<li>
For every model and hardware pair, there exists a critical sequence length beyond which the model becomes memory bound even for large batch sizes. We illustrate this theoretically in <a style="color: #0b3c5d" href="#figure-2">Figure 2-a</a>.
</li>
<li>
For moderate to long sequences, LLM inference becomes more and more memory-bound with increasing batch size!! We illustrate this behavior theoretically in <a style="color: #0b3c5d" href="#figure-2">Figure 2-b</a>.
</li>
</ol>
</p>
<div id="figure-2" class="image-container" style="display: flex; flex-direction: column; align-items: center;">
<div style="display: flex; justify-content: space-around; width: 100%;">
<img src="static/images/arithmetic_intensity.png" alt="Arithmetic Intensity" style="height: 400px; width: 400px; margin: 0 10px;" />
<img src="static/images/kvloadtime.png" alt="KV Load Time" style="height: 400px; width: 400px; margin: 0 10px;" />
</div>
<figcaption style="margin-top: 10px; text-align: center;">
<strong>Figure 2: (a) Theoretical Arithmetic Intensity vs prefill length for different kinds of GPUs</strong>
(LLaMA-2-7B and LLaMA-3.1-8B models, Batch Size 256) on the left
<br>
<strong>(b) KV, weights, activations, and compute times for different batch sizes </strong>(Self Speculation LLaMA-2-7B, Prefill length 4000) on the right
</figcaption>
</div>
<br>
<p>
The critical sequence length depends on the model type and hardware specifications. As can be seen in <a style="color: #0b3c5d" href="#figure-2">Figure 2-a</a>, for GQA-type models, the critical sequence length is expected to be higher due to their relatively lighter KV memory footprint. On the other hand, for GPUs with a higher <i>peak_flops / memory_bandwidth</i> ratio, the critical point arrives earlier.
<br><br>
For moderately small sequence lengths and small batch sizes, LLM inference is essentially bottlenecked by parameter loading time. As batch size increases, the arithmetic intensity of the linear projections increases, making the model compute-bound. On the other hand, with increasing sequence length, KV loading time starts to become the primary memory bottleneck.
Interestingly, unlike the model parameters, the KV cache size increases linearly with batch size. Hence, once the KV cache becomes larger than the model memory footprint, both the memory loading time and the computation time increase linearly with batch size. However, modern accelerators have a much higher ratio of peak FLOPs to memory bandwidth, so the increase in memory loading time outpaces the increase in computation time. This makes LLM inference more memory-bound in the high-throughput regime when the sequence length is long enough.
</p>
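<p>
This shift can be illustrated with a simple roofline-style model. The sketch below uses assumed hardware numbers (roughly A100-class) and a dense 7B-parameter model, and deliberately ignores attention FLOPs and activation traffic; it is an illustration of the trend, not a measurement:
</p>
<pre><code># Simplified per-step time model for one decoding step of a dense 7B model.
# All hardware numbers, byte counts, and the neglect of attention FLOPs /
# activation traffic are illustrative assumptions.

PEAK_FLOPS = 312e12            # assumed accelerator peak (FLOP/s)
MEM_BW = 2.0e12                # assumed memory bandwidth (bytes/s)

PARAM_BYTES = 7e9 * 2          # ~7B weights in 16-bit precision
KV_BYTES_PER_TOKEN = 2 * 32 * 4096 * 2   # K and V, 32 layers, hidden 4096, 16-bit

def decode_step_times(batch_size, seq_len):
    # Compute: roughly 2 FLOPs per weight per token in the batch.
    compute_s = 2.0 * 7e9 * batch_size / PEAK_FLOPS
    # Memory: weights are loaded once; the KV cache is loaded per sequence.
    memory_s = (PARAM_BYTES + KV_BYTES_PER_TOKEN * seq_len * batch_size) / MEM_BW
    return compute_s, memory_s

for batch_size in (8, 32, 128, 256):
    compute_s, memory_s = decode_step_times(batch_size, 32000)
    print(f"batch={batch_size:3d}  compute={compute_s*1e3:7.2f} ms  "
          f"kv+weight load={memory_s*1e3:8.2f} ms")
</code></pre>
<p>
Because the KV term in the memory time scales with batch size while the weight term does not, the memory loading time per step grows far faster than the compute time in this model, so the gap between the two only widens as the batch grows.
</p>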
<br>
<h4 class="title is-5">
<img src="static/images/icons/Stonks_emoji.png" style="height: 36px; display: inline; vertical-align: middle;"/>
But how can we achieve Higher Speedup with Bigger Batches?
</h4>
<p>
Now we dig deeper into the speedup breakdown in <a style="color: #0b3c5d" href="#eq-1">Eq. 1</a> to understand how we can achieve higher speedup with bigger batches:
<ol>
<li>
<strong><i>Verification of speculated tokens should not incur much extra cost compared to autoregressive decoding.</i></strong> In <a style="color: #0b3c5d" href="#figure-3">Figure 3</a>, we can see that for longer sequences, the target verification to decoding cost ratio remains close to 1, i.e., (T<sub>V</sub> / T<sub>T</sub> ≈ 1), even for very large batch sizes.
</li>
<li>
<strong><i>A low draft to target cost ratio is preferred for higher speedup.
</i></strong> In <a style="color: #0b3c5d" href="#figure-5">Figure 5</a>, we can see that this ratio goes down with increasing batch size i.e. (T<sub>D</sub> / T<sub>T</sub> → 0).
</li>
<li>
<strong><i>The acceptance rate should ideally be close to 1.</i></strong> This quantity is not expected to change with batch size due to the i.i.d. assumption for the batched sequences. Nevertheless, a high acceptance rate is important because it allows a longer speculation length.
</li>
</ol>
</p>
<h4 class="title is-5">
<img src="static/images/Verification.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Verification vs Target Latency
</h4>
<div style="display: flex; align-items: top; gap: 10px;">
<div style="flex: 1;">
<p>
With increasing batch size, verification of multiple tokens is expected to be significantly costlier than decoding as the computation time increases linearly with speculation length. However, when KV loading becomes the dominant bottleneck, the increase in KV loading time dominates the increase in computation time, restricting the verification to decoding cost ratio. As illustrated in <a style="color: #0b3c5d" href="#figure-3">Figure 3</a>, the cost ratio actually saturates to a value close to 1, especially for longer sequences.
</p>
<div id="figure-3" class="image-container" style="display: flex; flex-direction: column; align-items: center;">
<div style="display: flex; justify-content: space-around; width: 100%;">
<img src="static/images/ver_2_reg_llama2.png" alt="Ratio of Verification vs Target Time" style="height: 400px; width: 400px; margin: 0 10px;" />
<img src="static/images/ver_2_reg_llama3.png" alt="Ratio of Verification vs Target Time" style="height: 400px; width: 400px; margin: 0 10px;" />
</div>
<figcaption style="margin-top: 10px; text-align: center;">
<strong>Figure 3: Theoretical Ratio of Verification (speculation length=4) to Target Time vs Batch Size</strong>
(left: LLaMA-2-7B, right: LLaMA-3.1-8B)
</figcaption>
</div>
</div>
</div>
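<p>
Continuing the simplified model above, the sketch below (roofline-style timing with assumed numbers, for illustration only) shows why the verification-to-decoding cost ratio stays close to 1 once the KV cache dominates the step time:
</p>
<pre><code># Illustrative ratio of the time to verify gamma speculated tokens to one
# autoregressive target decoding step. Hardware and model numbers are
# assumptions; the max() roofline timing is a simplification.

PEAK_FLOPS = 312e12
MEM_BW = 2.0e12
PARAM_BYTES = 7e9 * 2
KV_BYTES_PER_TOKEN = 2 * 32 * 4096 * 2

def step_time(batch_size, seq_len, tokens_per_seq):
    # Verification and decoding read the same weights and KV cache;
    # verification just does tokens_per_seq times more matmul work.
    compute = 2.0 * 7e9 * batch_size * tokens_per_seq / PEAK_FLOPS
    memory = (PARAM_BYTES + KV_BYTES_PER_TOKEN * seq_len * batch_size) / MEM_BW
    return max(compute, memory)

for seq_len in (256, 4000, 32000):
    ratio = step_time(256, seq_len, 4) / step_time(256, seq_len, 1)
    print(f"seq_len={seq_len:6d}  T_V(gamma=4) / T_T = {ratio:.2f}")
</code></pre>
<p>
At short contexts, the extra verification compute shows up in the ratio, but at long contexts the KV loading term dominates both steps and the ratio saturates near 1, matching the trend in Figure 3.
</p>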
<br>
<h4 class="title is-5">
<img src="static/images/icons/Drafting.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Draft vs Target Latency
</h4>
<p>
We choose to use drafts with a fixed, small KV budget for speculation to address the KV bottleneck. Otherwise, in the high batch-size and sequence-length regime, the draft to target cost ratio would be decided by the draft to target memory loading ratio alone. From <a style="color: #0b3c5d" href="#figure-4">Figure 4</a>, we can see that this ratio increases with batch size and then saturates for both the Llama2-7B/Llama2-70B and Llama-3.1-8B/Llama-3.1-70B draft-target pairs. To improve speedup with batch size, we want this ratio to go down instead.
<br>
Taking inspiration from <a href="https://arxiv.org/pdf/2404.11912" rel="external nofollow noopener" target="_blank" style="color: #209CEE;">TriForce</a>, we fix a small KV budget for the draft model so that the draft cost does not increase sharply with batch size. As a result, the draft to target cost ratio decreases with increasing batch size, as shown in <a style="color: #0b3c5d" href="#figure-5">Figure 5</a>. Furthermore, given a fixed KV budget, the ratio naturally goes down more rapidly for longer sequence lengths. A simplified numerical sketch after Figure 5 illustrates this trend.
<!-- <br>
We use StreamingLLM to draft tokens, which is particularly effective because of small KV loading time. -->
<div id="figure-4" class="image-container" style="display: flex; flex-direction: column; align-items: center;">
<div style="display: flex; justify-content: space-around; width: 100%;">
<img src="static/images/draft_to_target_mem_ratio_llama2.png" alt="Ratio of Draft vs Target Memory Loading Llama2-70b" style="height: 400px; width: 400px; margin: 0 10px;" />
<img src="static/images/draft_to_target_mem_ratio_llama3.png" alt="Ratio of Draft vs Target Time Llama3-70b" style="height: 400px; width: 400px; margin: 0 10px;" />
</div>
<figcaption style="margin-top: 10px; text-align: center;">
<strong>Figure 4: Theoretical Ratio of Draft (Full Cache) to Target Memory Loading vs Batch Size</strong>
(left: LLaMA-2-7B/LLaMA-2-70B, right: LLaMA-3.1-8B/LLaMA-3.1-70B)
</figcaption>
</div>
<br>
<div id="figure-5" class="image-container" style="display: flex; flex-direction: column; align-items: center;">
<div style="display: flex; justify-content: space-around; width: 100%;">
<img src="static/images/draft_to_target_llama2.png" alt="Ratio of Draft vs Target Time" style="height: 400px; width: 400px; margin: 0 10px;" />
<img src="static/images/draft_to_target_llama3.png" alt="Ratio of Draft vs Target Time" style="height: 400px; width: 400px; margin: 0 10px;" />
</div>
<figcaption style="margin-top: 10px; text-align: center;">
<strong>Figure 5: Theoretical Ratio of Draft (StreamingLLM budget=256) to Target Time vs Batch Size</strong>
(left: LLaMA-2-7B, right: LLaMA-3.1-8B)
</figcaption>
</div>
</p>
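<p>
The trend in <a style="color: #0b3c5d" href="#figure-5">Figure 5</a> can be reproduced with the same kind of simplified memory-loading model. In the sketch below, the Llama-3.1 layer counts, KV head counts, and byte sizes are assumptions used for illustration, not measured values:
</p>
<pre><code># Illustrative draft-to-target memory-loading ratio for a Llama-3.1-8B draft
# speculating a Llama-3.1-70B target. All configuration numbers below are
# assumptions for illustration.

MEM_BW = 2.0e12                                 # assumed bandwidth (bytes/s)

TARGET_PARAM_BYTES = 70e9 * 2                   # 70B weights, 16-bit
DRAFT_PARAM_BYTES = 8e9 * 2                     # 8B weights, 16-bit
TARGET_KV_PER_TOKEN = 2 * 80 * 8 * 128 * 2      # K+V, 80 layers, 8 KV heads, dim 128
DRAFT_KV_PER_TOKEN = 2 * 32 * 8 * 128 * 2       # K+V, 32 layers, 8 KV heads, dim 128

def load_time(param_bytes, kv_per_token, kv_tokens, batch_size):
    return (param_bytes + kv_per_token * kv_tokens * batch_size) / MEM_BW

SEQ_LEN = 32000
DRAFT_BUDGET = 512        # fixed StreamingLLM-style KV budget for the draft

for batch_size in (8, 32, 128, 256):
    target = load_time(TARGET_PARAM_BYTES, TARGET_KV_PER_TOKEN, SEQ_LEN, batch_size)
    full = load_time(DRAFT_PARAM_BYTES, DRAFT_KV_PER_TOKEN, SEQ_LEN, batch_size)
    sparse = load_time(DRAFT_PARAM_BYTES, DRAFT_KV_PER_TOKEN, DRAFT_BUDGET, batch_size)
    print(f"batch={batch_size:3d}  full-cache draft/target={full/target:.2f}"
          f"  fixed-budget draft/target={sparse/target:.3f}")
</code></pre>
<p>
With a full draft KV cache, the ratio climbs toward the per-token KV footprint ratio (about 0.4 in this setup) as the batch grows; with a fixed draft budget, the draft cost is dominated by its constant weights and small cache, so the ratio keeps falling.
</p>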
<h4>
<img src="static/images/icons/choice_of_eggs.png" style="height: 36px; display: inline; vertical-align: middle;"/>
Choice of Drafts
</h4>
<p>
To ensure a high acceptance rate for our draft with sparse KV, we use the StreamingLLM method. Interestingly, we find this simple method to be quite effective even for very long sequences. On the PG-19 dataset, we find the acceptance rate of Llama-3.1-8B self-speculation with a StreamingLLM budget of 512 to be relatively stable for sequence lengths ranging from 4000 to 100000.
</p>
<div style="width: 800px; margin: 0 auto;">
<table cellpadding="5" cellspacing="0" style="width: 100%; border-collapse: collapse;">
<thead>
<tr style="border-bottom: 1px solid black;">
<th>Sequence Length</th>
<th>4000</th>
<th>8000</th>
<th>16000</th>
<th>32000</th>
<th>64000</th>
<th>100000</th>
</tr>
</thead>
<tbody>
<tr>
<td>Acceptance Rate</td>
<td>0.84</td>
<td>0.83</td>
<td>0.82</td>
<td>0.81</td>
<td>0.79</td>
<td>0.79</td>
</tr>
</tbody>
</table>
</div>
<br>
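<p>
For reference, a StreamingLLM-style draft cache can be sketched as follows: keep a few attention-sink tokens from the beginning of the sequence plus the most recent tokens, within a fixed budget. This is our own minimal illustration (tensor shapes and helper names are assumptions), not the MagicDec implementation:
</p>
<pre><code>import torch

# Minimal sketch of StreamingLLM-style KV selection for a draft model:
# keep a few "attention sink" tokens from the start of the sequence plus
# the most recent tokens, within a fixed budget. Shapes and names are
# illustrative assumptions.

def streaming_llm_indices(seq_len, budget=512, num_sinks=4):
    keep = min(seq_len, budget)
    sink_count = min(num_sinks, keep)
    recent_count = keep - sink_count
    sinks = torch.arange(sink_count)
    recent = torch.arange(seq_len - recent_count, seq_len)
    return torch.cat([sinks, recent])

# Example: sparsify a full key cache of shape (batch, kv_heads, seq_len, head_dim).
batch, kv_heads, seq_len, head_dim = 2, 8, 4096, 128
k_cache = torch.randn(batch, kv_heads, seq_len, head_dim)
kept = streaming_llm_indices(seq_len)        # 512 positions: 4 sinks + 508 recent
draft_k = k_cache[:, :, kept, :]             # shape (2, 8, 512, 128)
print(draft_k.shape)
</code></pre>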
<p>
With a high acceptance rate and a reasonable verification cost, we can afford to use a longer speculation length. Contrary to previous works (e.g., <a style="color: #209CEE" href="https://arxiv.org/abs/2406.14066">Liu et al., 2024</a>; <a style="color: #209CEE" href="https://arxiv.org/abs/2310.18813">Su et al., 2023</a>), we observe that the optimal speculation length goes up with increasing batch size.
</p>
</div>
</div>
</div>
</div>
</div>
</section>
<section>
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="text-align: center;">
<img src="static/images/icons/experiments.png" style="height: 50px; display: inline; vertical-align: middle;"/>
Experiments
</h2>
<div class="content has-text-justified">
<p>
We evaluate the speedup of MagicDec on LLaMA-2-7B-32K and LLaMA-3.1-8B-128K at different batch sizes and sequence lengths. The test sequences are sampled from the PG-19 dataset. For drafting, we have explored the following options:
<ol>
<li><strong>Standalone tiny GQA model: </strong>For the LLaMA-2-7B-32K target model, we used TinyLlama-1.1B for drafting. There are not many suitable small draft models for LLaMA-3.1-8B-128K with a matching vocabulary size.
</li>
<li><strong>Self-speculation: </strong>StreamingLLM-based self-speculation is used for the LLaMA-2-7B-32K and LLaMA-3.1-8B-128K models.
</li>
</ol>
For all draft models, we experimented with StreamingLLM KV caches of different budgets.
<br>
The experiments are conducted on 8 NVIDIA A100 GPUs with 8-way tensor parallelism. We also tested on higher-end GPUs like the NVIDIA H100_SXM and lower-cost alternatives like the NVIDIA L40.
<br>
<br>
<div id="figure-6" class="image-container">
<img src="static/images/speedups_budget512.png" alt="Results" height="350" />
<figcaption style="font-size: 0.9em;">
<!-- <strong>Figure 1: End-to-end Speculative Decoding Speedups for Various Target-Draft pairs on 8 NVIDIA A100s.</strong> -->
<strong>
Figure 6: End-to-end Speculative Decoding Speedups (with optimal speculation length) on 8 NVIDIA A100s. </strong> (a) TinyLLaMA-1.1B Draft with LLaMA-2-7B-32K Target, (b) LLaMA-2-7B-32K with Self-Speculation, (c) LLaMA-3.1-8B with Self-Speculation.
</figcaption>
</div>
<br>
<br>
The empirical results clearly show an increasing trend in speedup for larger batch sizes. Furthermore, it is interesting to see that the optimal speculation length also increases with increasing batch size.
<br>
For the GQA target model, the speedup is expected to be lower because of its lighter KV memory footprint and hence higher arithmetic intensity. Regardless, MagicDec is still able to achieve better speedups for larger batches.
<br><br>
<style>
table {
border-collapse: collapse;
width: 80%;
margin-left: auto;
margin-right: auto;
}
th, td {
text-align: left;
padding: 12px;
}
tr td {
border-bottom: 1px solid black;
}
th {
border-bottom: 2px solid black;
}
/* Fix for table1 */
#table1 tr:nth-child(4) td,
#table1 tr:nth-child(8) td {
border-bottom: 2px solid black;
}
/* Fix for table2 */
#table2 tr:nth-child(3) td,
#table2 tr:nth-child(6) td {
border-bottom: 2px solid black;
}
</style>
<table id="table1">
<caption style="caption-side: top; text-align: center; font-weight: bold; margin-bottom: 10px;">
End-to-end Speculative Decoding Speedups for Various Target-Draft pairs on 8xA100s
</caption>
<tr>
<th>Target</th>
<th>Draft</th>
<th>Prefill</th>
<th>Batch Size</th>
<th>Optimal Spec Len</th>
<th>Speedup</th>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>Selfspec</td>
<td>32000</td>
<td>32</td>
<td>3</td>
<td>1.22</td>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>Selfspec</td>
<td>32000</td>
<td>64</td>
<td>3</td>
<td>1.38</td>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>Selfspec</td>
<td>32000</td>
<td>128</td>
<td>4</td>
<td>1.47</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>32</td>
<td>3</td>
<td>1.18</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>64</td>
<td>3</td>
<td>1.48</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>128</td>
<td>4</td>
<td>1.63</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>32000</td>
<td>32</td>
<td>4</td>
<td>2.0</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>32</td>
<td>3</td>
<td>1.29</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>64</td>
<td>3</td>
<td>1.57</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>128</td>
<td>4</td>
<td>1.66</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>32000</td>
<td>32</td>
<td>4</td>
<td>1.91</td>
</tr>
</table>
<br>
<table id="table2">
<caption style="caption-side: top; text-align: center; font-weight: bold; margin-bottom: 10px;">
End-to-end Speculative Decoding Speedups for Various Target-Draft pairs on 8xH100_SXMs
</caption>
<tr>
<th>Target</th>
<th>Draft</th>
<th>Prefill</th>
<th>Batch Size</th>
<th>Optimal Spec Len</th>
<th>Speedup</th>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>Selfspec</td>
<td>32000</td>
<td>32</td>
<td>3</td>
<td>1.23</td>
</tr>
<tr>
<td>Llama3.1-8B</td>
<td>Selfspec</td>
<td>32000</td>
<td>64</td>
<td>3</td>
<td>1.43</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>32</td>
<td>3</td>
<td>1.19</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>64</td>
<td>4</td>
<td>1.42</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Selfspec</td>
<td>8000</td>
<td>128</td>
<td>4</td>
<td>1.65</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>32</td>
<td>2</td>
<td>1.32</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>64</td>
<td>3</td>
<td>1.48</td>
</tr>
<tr>
<td>Llama2-7B</td>
<td>Tinyllama1.1B</td>
<td>8000</td>
<td>128</td>
<td>3</td>
<td>1.64</td>
</tr>
</table>
Because the H100_SXM has a higher peak_flops / memory_bandwidth ratio than the A100, MagicDec achieves greater speedups on H100_SXM GPUs.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section: Conclusion and Future Work -->
<section class="section hero is-light">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="text-align: center;">
<img src="static/images/icons/Telescope.png" style="height: 50px; display: inline; vertical-align: middle;"/>
Conclusion and Future Work
</h2>
<div class="content has-text-justified">
<p>
This work reassesses the trade-off between throughput and latency in long-context scenarios. We demonstrate that <strong>Speculative Decoding can enhance throughput, reduce latency, and maintain accuracy</strong>. Our theoretical and empirical analysis reveals that as the sequence length and batch size increase, the bottleneck shifts from compute to memory. Furthermore, we exploit the nature of the memory bottleneck to achieve even higher speedups for larger batches.
MagicDec achieves up to <strong>2x</strong> speedup for LLaMA-2-7B-32K and <strong>1.84x</strong> for LLaMA-3.1-8B on 8 A100 GPUs.
Beyond these numbers, the observation that speedup scales with batch size is especially promising for high-throughput serving of long-context requests. So, stay tuned for more exciting results in the future!! 🚀
</p>
</div>
<div class="has-text-centered">
<img src="static/images/icons/MagicDec.png" alt="<i>TriForce</i>" width="200" height="200" />
</div>
</div>
</div>
</div>
</section>
<!-- Section: References -->
<section class="section" id="BibTeX">
<!-- <div class="container is-max-desktop content"> -->
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<h2 class="title">BibTeX</h2>
<pre><code>@misc{chen2024magicdecbreakinglatencythroughputtradeoff,
title={MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding},
author={Jian Chen and Vashisth Tiwari and Ranajoy Sadhukhan and Zhuoming Chen and Jinyuan Shi and Ian En-Hsu Yen and Beidi Chen},
year={2024},
eprint={2408.11049},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11049},
}</code></pre>
</div>
</div>
</div>
</section>
<footer class="footer">
<div class="container is-fluid">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="content">
<p>
This page was built using the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template" target="_blank">Academic Project Page Template</a> which was adopted from the <a href="https://nerfies.github.io" target="_blank">Nerfies</a> project page.
You are free to borrow the source code of this website, we just ask that you link back to this page in the footer. <br> This website is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
Commons Attribution-ShareAlike 4.0 International License</a>. The icons were created by GPT-4.
</p>
</div>
</div>
</div>
</div>
</footer>