Add fused linear for the LLAMA MLP block and multi-head attention block #6425

Merged
zjjlivein merged 4 commits into PaddlePaddle:develop from littsk:fused_linear_feature on Jul 26, 2023
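The idea behind a fused linear in a LLaMA-style MLP is that the gate and up projections share the same input, so their weight matrices can be concatenated and computed with a single, wider matmul instead of two separate ones. The sketch below is a minimal NumPy illustration of that equivalence, not the PR's actual PaddlePaddle implementation; all names (`w_gate`, `w_up`, `w_fused`) are hypothetical.

```python
import numpy as np

# Hypothetical sketch: a LLaMA-style MLP applies gate_proj and up_proj to the
# same input x. Fusing the two weight matrices column-wise lets one matmul
# replace two, which reduces kernel launches on an accelerator.

rng = np.random.default_rng(0)
hidden, intermediate = 8, 16
x = rng.standard_normal((4, hidden))
w_gate = rng.standard_normal((hidden, intermediate))
w_up = rng.standard_normal((hidden, intermediate))

def silu(v):
    # SiLU activation used in the LLaMA MLP: v * sigmoid(v)
    return v / (1.0 + np.exp(-v))

# Unfused: two separate projections of the same input.
out_unfused = silu(x @ w_gate) * (x @ w_up)

# Fused: concatenate the weights, run one matmul, then split the result.
w_fused = np.concatenate([w_gate, w_up], axis=1)
gate, up = np.split(x @ w_fused, 2, axis=1)
out_fused = silu(gate) * up

assert np.allclose(out_unfused, out_fused)
```

The same trick applies to multi-head attention, where the Q, K, and V projections of the same input can be fused into one linear layer and split afterwards.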