[UX] Adding examples in jupyter notebook for quantization and bitmask application #34

Draft · wants to merge 2 commits into base: feature/damian/readme_and_basic_pathways
252 changes: 252 additions & 0 deletions examples/bitmask_compression.ipynb
@@ -0,0 +1,252 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bitmask Compression Example ##\n",
"\n",
"Bitmask compression allows for storing sparse tensors efficiently on the disk. \n",
"\n",
"Instead of storing each zero element represented as an actual number, we use bitmask to indicate which tensor entries correspond to zero elements. This approach is useful when the matrix is mostly zero values, as it saves space by not wastefully storing those zeros explicitly.\n",
"\n",
"The example below shows how to save and load sparse tensors using bitmask compression. It also demonstrates the benefits of the bitmask compression over \"dense\" representation, and finally, introduces the enhanced `safetensors` file format for storing sparse weights."
]
},
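{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before reaching for the library, the cell below sketches the core idea on a single toy tensor, using only `torch` and `numpy`. It is an illustration of the concept rather than the library's actual on-disk layout: we keep one bit per element to mark the nonzero entries, and store only the nonzero values themselves."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"\n",
"# a toy 4x8 tensor with roughly half of its entries zeroed out\n",
"dense = torch.randn(4, 8) * (torch.rand(4, 8) > 0.5)\n",
"\n",
"mask = dense != 0                            # one boolean per element\n",
"values = dense[mask]                         # 1d tensor of only the nonzero values\n",
"packed = np.packbits(mask.numpy(), axis=-1)  # 8 mask bits per byte -> shape (4, 1)\n",
"\n",
"# decompression: unpack the bitmask and scatter the values back into place\n",
"unpacked = np.unpackbits(packed, axis=-1, count=dense.shape[-1]).astype(bool)\n",
"restored = torch.zeros_like(dense)\n",
"restored[torch.from_numpy(unpacked)] = values\n",
"assert torch.equal(restored, dense)"
]
},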
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import os\n",
"from safetensors import safe_open\n",
"from safetensors.torch import save_model\n",
"from compressed_tensors import save_compressed_model, load_compressed, BitmaskConfig\n",
"from transformers import AutoModelForCausalLM"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LlamaForCausalLM(\n",
" (model): LlamaModel(\n",
" (embed_tokens): Embedding(32000, 768)\n",
" (layers): ModuleList(\n",
" (0-11): 12 x LlamaDecoderLayer(\n",
" (self_attn): LlamaSdpaAttention(\n",
" (q_proj): Linear(in_features=768, out_features=768, bias=False)\n",
" (k_proj): Linear(in_features=768, out_features=768, bias=False)\n",
" (v_proj): Linear(in_features=768, out_features=768, bias=False)\n",
" (o_proj): Linear(in_features=768, out_features=768, bias=False)\n",
" (rotary_emb): LlamaRotaryEmbedding()\n",
" )\n",
" (mlp): LlamaMLP(\n",
" (gate_proj): Linear(in_features=768, out_features=2048, bias=False)\n",
" (up_proj): Linear(in_features=768, out_features=2048, bias=False)\n",
" (down_proj): Linear(in_features=2048, out_features=768, bias=False)\n",
" (act_fn): SiLU()\n",
" )\n",
" (input_layernorm): LlamaRMSNorm()\n",
" (post_attention_layernorm): LlamaRMSNorm()\n",
" )\n",
" )\n",
" (norm): LlamaRMSNorm()\n",
" )\n",
" (lm_head): Linear(in_features=768, out_features=32000, bias=False)\n",
")"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# load a tiny, pruned llama2 model\n",
"model_name = \"neuralmagic/llama2.c-stories110M-pruned50\"\n",
"model = AutoModelForCausalLM.from_pretrained(model_name)\n",
"model"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The example layer model.layers.0.self_attn.q_proj.weight has sparsity 0.50%\n"
]
}
],
"source": [
"# most of the weights of the model are pruned to 50% (except for few layers such as lm_head or embeddings)\n",
"state_dict = model.state_dict()\n",
"state_dict.keys()\n",
"example_layer = \"model.layers.0.self_attn.q_proj.weight\"\n",
"print(f\"The example layer {example_layer} has sparsity {torch.sum(state_dict[example_layer] == 0).item() / state_dict[example_layer].numel():.2f}%\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The model is 31.67% sparse overall\n"
]
}
],
"source": [
"# we can inspect to total sparisity of the state_dict\n",
"total_num_parameters = 0\n",
"total_num_zero_parameters = 0\n",
"for key in state_dict:\n",
" total_num_parameters += state_dict[key].numel()\n",
" total_num_zero_parameters += state_dict[key].eq(0).sum().item()\n",
"print(f\"The model is {total_num_zero_parameters/total_num_parameters*100:.2f}% sparse overall\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Compressing model: 100%|██████████| 111/111 [00:06<00:00, 17.73it/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Size of the model's weights on disk using safetensors: 417.83 MB\n",
"Size of the model's weights on disk using compressed-tensors: 366.82 MB\n",
"The compression ratio is x1.14\n"
]
}
],
"source": [
"# let's save the model on disk using safetensors and compressed-tensors and compare the size on disk\n",
"\n",
"## save the model using safetensors ##\n",
"save_model(model, \"model.safetensors\")\n",
"size_on_disk_mb = os.path.getsize('model.safetensors') / 1024 / 1024\n",
"\n",
"## save the model using compressed-tensors ##\n",
"save_compressed_model(model, \"compressed_model.safetensors\", compression_format=\"sparse-bitmask\")\n",
"compressed_size_on_disk_mb = os.path.getsize('compressed_model.safetensors') / 1024 / 1024\n",
"\n",
"print(f\"Size of the model's weights on disk using safetensors: {size_on_disk_mb:.2f} MB\")\n",
"print(f\"Size of the model's weights on disk using compressed-tensors: {compressed_size_on_disk_mb:.2f} MB\")\n",
"print(\"The compression ratio is x{:.2f}\".format(size_on_disk_mb / compressed_size_on_disk_mb))"
]
},
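{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough back-of-envelope estimate (assuming fp32 weights, one bitmask bit per element, and ignoring both the per-tensor shape/offset metadata and the layers that are left dense), compressing at sparsity `s` stores `(1 - s) * 4` bytes of values plus `1/8` byte of bitmask per element, so the expected ratio is `4 / ((1 - s) * 4 + 1/8)`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rough estimate only: real ratios are lower because some layers stay dense\n",
"# and each compressed tensor carries extra shape/offset metadata\n",
"def estimated_ratio(sparsity, bytes_per_element=4):\n",
"    values_bytes = (1 - sparsity) * bytes_per_element  # nonzero values\n",
"    bitmask_bytes = 1 / 8                              # one bit per element\n",
"    return bytes_per_element / (values_bytes + bitmask_bytes)\n",
"\n",
"for s in (0.3, 0.5, 0.7, 0.9):\n",
"    print(f\"sparsity {s:.0%}: estimated compression ratio x{estimated_ratio(s):.2f}\")"
]
},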
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Storing weights with around 30% of zero entries requires significantly less disk space when using `compressed-tensors`. The compression ratio improves radically for more sparse models. \n",
"\n",
"We can load back the `state_dict` from the compressed and uncompressed representation on disk and confirm, that they represent same tensors in memory."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Once loaded, the state_dicts from safetensors and compressed-tensors are equal: True\n"
]
}
],
"source": [
"# load the safetensor and the compressed-tensor and show that they have the same representation\n",
"\n",
"## load the uncompressed safetensors to memory ##\n",
"state_dict_1 = {}\n",
"with safe_open('model.safetensors', framework=\"pt\") as f:\n",
" for key in f.keys():\n",
" state_dict_1[key] = f.get_tensor(key)\n",
"\n",
"## load the compressed-tensors to memory ##\n",
"config = BitmaskConfig() # we need to specify the method for decompression\n",
"state_dict_2 = load_compressed(\"compressed_model.safetensors\", config)\n",
"\n",
"tensors_equal = all(torch.equal(state_dict_1[key], state_dict_2[key]) for key in state_dict_1)\n",
"\n",
"print(f\"Once loaded, the state_dicts from safetensors and compressed-tensors are equal: {tensors_equal}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SafeTensors File Format\n",
"\n",
"The reason why the introduced bitmask compression is much more efficient, is imbibing the information about the compression in the header of the `.safetensors` file.\n",
"For each parameter in the uncompressed `state_dict`, we store the following attributes needed for decompression in the compressed `state_dict`:\n",
"\n",
"* Compressed tensor\n",
"* Bitmask\n",
"* Uncompressed shape\n",
"* Row offsets\n",
"\n",
"```bash\n",
"# Dense\n",
"{\n",
" PARAM_NAME: uncompressed_tensor\n",
"}\n",
"\n",
"# Compressed\n",
"{\n",
" PARAM_NAME.compressed: compressed_tensor, # 1d tensor\n",
" PARAM_NAME.bitmask: value, # 2d bitmask tensor (nrows x (ncols / 8))\n",
" PARAM_NAME.shape: value, # Uncompressed shape tensor\n",
" PARAM_NAME.row_offsets: value # 1d offsets tensor\n",
"}\n",
"```"
]
},
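{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the roles of these four attributes concrete, the cell below is a hedged sketch of how they could be turned back into a dense tensor. It assumes the bitmask packs one bit per element along each row (most-significant bit first, as `numpy.packbits` does) and that `row_offsets[i]` is the index into the compressed values where row `i` starts; the library's actual decompression routines may differ. The row offsets let a single row be decompressed without scanning the values of all preceding rows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"\n",
"def decompress_sketch(compressed, bitmask, shape, row_offsets):\n",
"    # compressed:  1d tensor of the nonzero values, stored row by row\n",
"    # bitmask:     (nrows, ceil(ncols / 8)) uint8 tensor, one bit per element\n",
"    # shape:       the uncompressed (nrows, ncols)\n",
"    # row_offsets: start of each row's values within `compressed`\n",
"    nrows, ncols = int(shape[0]), int(shape[1])\n",
"    mask = np.unpackbits(bitmask.numpy(), axis=-1, count=ncols).astype(bool)\n",
"    out = torch.zeros(nrows, ncols, dtype=compressed.dtype)\n",
"    for i in range(nrows):\n",
"        start = int(row_offsets[i])\n",
"        end = start + int(mask[i].sum())\n",
"        out[i, torch.from_numpy(mask[i])] = compressed[start:end]\n",
"    return out"
]
}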
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}