From 208d45bbf7cbde2abfb233e7d10803553fbcf126 Mon Sep 17 00:00:00 2001
From: Jeff Rasley
Date: Tue, 22 Mar 2022 21:28:56 -0700
Subject: [PATCH] fix dead MoQ link (#1855)

---
 docs/_tutorials/MoQ-tutorial.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/_tutorials/MoQ-tutorial.md b/docs/_tutorials/MoQ-tutorial.md
index 8776ee6f3775..587ef4b1ed67 100644
--- a/docs/_tutorials/MoQ-tutorial.md
+++ b/docs/_tutorials/MoQ-tutorial.md
@@ -3,7 +3,7 @@ title: "DeepSpeed Mixture-of-Quantization (MoQ)"
 tags: training quantization
 ---
 
-DeepSpeed introduces new support for model compression using quantization, called Mixture-of-Quantization (MoQ). MoQ is designed on top of QAT (Quantization-Aware Training), with the difference that it schedules various data precisions across the training process. It starts with quantizing the model with a high precision, such as FP16 or 16-bit quantization, and reduce the precision through a pre-defined schedule until reaching the target quantization bits (like 8-bit). Moreover, we use second-order information of the model parameters to dynamically adjust the quantization schedule for each of layer of the network separately. We have seen that by adding such schedule and using various data precision in the training process, we can quantize the model with better quality and preserve accuracy. For a better understanding of MoQ methodology, please refer to MoQ deep-dive, [here](https://www.deepspeed.ai/posts/2021-05-05-MoQ/).
+DeepSpeed introduces new support for model compression using quantization, called Mixture-of-Quantization (MoQ). MoQ is designed on top of QAT (Quantization-Aware Training), with the difference that it schedules various data precisions across the training process. It starts with quantizing the model with a high precision, such as FP16 or 16-bit quantization, and reduce the precision through a pre-defined schedule until reaching the target quantization bits (like 8-bit). Moreover, we use second-order information of the model parameters to dynamically adjust the quantization schedule for each of layer of the network separately. We have seen that by adding such schedule and using various data precision in the training process, we can quantize the model with better quality and preserve accuracy. For a better understanding of MoQ methodology, please refer to MoQ deep-dive, [here](https://www.deepspeed.ai/2021/05/04/MoQ.html).
 
 Below, we use fine-tune for the GLUE tasks as an illustration of how to use MoQ.
 
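For context on the paragraph this patch touches: it describes MoQ as quantization-aware training that starts at 16-bit precision and reduces it on a pre-defined schedule toward a target bit width (e.g. 8-bit), optionally using second-order (eigenvalue) information to adapt the schedule per layer. Below is a minimal sketch of how those knobs might appear in a DeepSpeed config dict; the key names (`quantize_training`, `quantize_bits`, `quantize_schedule`, `eigenvalue`, etc.) are assumptions recalled from the MoQ tutorial being edited here, not a verified schema, so check them against the current DeepSpeed documentation before use.

```python
# Illustrative only: a DeepSpeed config (normally written as a ds_config JSON file)
# enabling MoQ-style quantization-aware training. Key names are assumptions based
# on the MoQ tutorial and should be verified against the DeepSpeed docs.
moq_config = {
    "quantize_training": {
        "enabled": True,
        # Start at 16-bit precision ...
        "quantize_bits": {
            "start_bits": 16,
            "target_bits": 8,  # ... and anneal down to the 8-bit target.
        },
        # Drop precision every `quantize_period` steps once past `schedule_offset`.
        "quantize_schedule": {
            "quantize_period": 400,
            "schedule_offset": 0,
        },
        "quantize_groups": 8,  # group-wise quantization granularity
        # Optional: second-order (eigenvalue) information adapts the schedule
        # per layer, as described in the paragraph above.
        "eigenvalue": {
            "enabled": True,
            "max_iter": 50,
            "tol": 1e-2,
            "layer_name": "bert.encoder.layer",  # hypothetical module prefix
            "layer_num": 12,
        },
    },
}
```

In practice such a dict is serialized to the JSON config file passed to the training script (e.g. via `--deepspeed_config`), alongside the usual optimizer and fp16 settings used in the GLUE fine-tuning example the tutorial walks through.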