[LLM Inference] Qwen2_Moe Support wint4 #9129
0.00% of diff hit (target 80.00%)
Annotations (codecov / codecov/patch — added lines were not covered by tests):
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1302-L1304
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1306-L1307
paddlenlp/experimental/transformers/fused_transformer_layers.py#L1322-L1324
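For context on what the PR title refers to: "wint4" is weight-only int4 quantization, in which each weight is stored as a signed 4-bit integer, so two weights fit in one byte. The sketch below is only an illustration of that packing idea in pure Python; it is not the PaddleNLP kernel, and the function names `pack_int4`/`unpack_int4` are hypothetical.

```python
# Illustrative sketch of weight-only int4 (wint4) packing: two signed
# 4-bit values [-8, 7] per byte. NOT the PaddleNLP implementation.

def pack_int4(values):
    """Pack ints in [-8, 7] into bytes, two nibbles per byte."""
    if len(values) % 2:
        values = values + [0]          # pad to an even count
    packed = bytearray()
    for lo, hi in zip(values[::2], values[1::2]):
        packed.append((lo & 0xF) | ((hi & 0xF) << 4))
    return bytes(packed)

def unpack_int4(data, count):
    """Inverse of pack_int4: recover `count` signed 4-bit ints."""
    out = []
    for byte in data:
        for nibble in (byte & 0xF, byte >> 4):
            # Re-sign-extend the 4-bit two's-complement nibble.
            out.append(nibble - 16 if nibble >= 8 else nibble)
    return out[:count]

weights = [-8, -1, 0, 3, 7, 5]
packed = pack_int4(weights)            # 3 bytes for 6 weights
assert unpack_int4(packed, len(weights)) == weights
```

Real wint4 inference additionally stores per-channel or per-group float scales so the int4 codes can be dequantized (or consumed directly by fused matmul kernels); the packing above only shows the storage-density half of the scheme.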