Fix opt_125m_woq_gptq_int4_dq_ggml issue (#1965)
Signed-off-by: Kaihui-intel <kaihui.tang@intel.com>
Kaihui-intel authored Aug 6, 2024
1 parent b35ff8f commit b99abae
Showing 2 changed files with 2 additions and 2 deletions.
.azure-pipelines/model-test-3x.yml (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ pr:
 include:
   - neural_compressor/common
   - neural_compressor/torch
-  - examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/llm
+  - examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/weight_only
   - setup.py
   - requirements_pt.txt
   - .azure-pipelines/scripts/models
(second changed file; path not captured)

@@ -50,7 +50,7 @@ function run_tuning {
extra_cmd=$extra_cmd" --double_quant_type BNB_NF4"
elif [ "${topology}" = "opt_125m_woq_gptq_int4_dq_ggml" ]; then
model_name_or_path="facebook/opt-125m"
extra_cmd=$extra_cmd" --woq_algo GPTQ --woq_bits 4 --woq_group_size 128 --woq_scheme asym --woq_use_mse_search --gptq_use_max_length --gptq_percdamp 0.1 --gptq_actorder"
extra_cmd=$extra_cmd" --woq_algo GPTQ --woq_bits 4 --woq_group_size 128 --woq_scheme asym --woq_use_mse_search --gptq_use_max_length --gptq_percdamp 0.8 --gptq_actorder"
extra_cmd=$extra_cmd" --double_quant_type GGML_TYPE_Q4_K"
elif [ "${topology}" = "llama2_7b_gptq_int4" ]; then
model_name_or_path="meta-llama/Llama-2-7b-hf"
