Awesome Mixture of Experts (MoE)

Awesome Mixture of Experts: A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)


This repository, Awesome Mixture of Experts, collects resources and papers on Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME).

You are welcome to share your papers, thoughts, and ideas by submitting an issue!

Contents

Course
Presentation
Books
Papers
Acknowledgement

Course

CS324: Large Language Models - Selective Architectures
Percy Liang, Tatsunori Hashimoto, Christopher Ré
Stanford University, [Link]
Winter 2022

CSC321: Introduction to Neural Networks and Machine Learning - Mixtures of Experts
Geoffrey Hinton
University of Toronto, [Link]
Winter 2014

CS2750: Machine Learning - Ensemble Methods and Mixtures of Experts
Milos Hauskrecht
University of Pittsburgh, [Link]
Spring 2004

Presentation

Mixture-of-Experts in the Era of LLMs: A New Odyssey
Tianlong Chen, Yu Cheng, Beidi Chen, Minjia Zhang, Mohit Bansal
ICML 2024, [Link] [Slides]
2024

Books

The Path to Artificial General Intelligence: Insights from Adversarial LLM Dialogue
Edward Y. Chang
Stanford University, [Link]
March 2024

Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media
Gerhard Paaß, Sven Giesselbach
Artificial Intelligence: Foundations, Theory, and Algorithms (Springer Nature), [Link]
16 Feb 2023

Papers

Survey

A Survey on Mixture of Experts
Weilin Cai, Juyong Jiang, Fan Wang, Jing Tang, Sunghun Kim, Jiayi Huang
arXiv, [Paper] [GitHub]
8 Aug 2024

Routers in Vision Mixture of Experts: An Empirical Study
Tianlin Liu, Mathieu Blondel, Carlos Riquelme, Joan Puigcerver
TMLR, [Paper]
18 Apr 2024

Foundational Work

Sparse-Gated Mixture of Experts in Transformer

Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
William Fedus, Barret Zoph, Noam Shazeer
JMLR, [Paper]
16 Jun 2022

Sparse-Gated Mixture of Experts in LSTM

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean
ICLR 2017, [Paper]
23 Jan 2017
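
For intuition, the common idea in these foundational papers is a trainable gate that scores every expert and activates only the top-k experts per token. Below is a minimal PyTorch sketch of top-k gating; class and tensor names are illustrative, and the noise term and auxiliary losses from the paper are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    """Minimal top-k gate: score all experts, keep the k largest, renormalize."""
    def __init__(self, d_model: int, num_experts: int, k: int = 2):
        super().__init__()
        self.w_gate = nn.Linear(d_model, num_experts, bias=False)
        self.k = k

    def forward(self, x):                       # x: [tokens, d_model]
        logits = self.w_gate(x)                 # [tokens, num_experts]
        topk_val, topk_idx = logits.topk(self.k, dim=-1)
        gates = F.softmax(topk_val, dim=-1)     # weights over the selected experts only
        return gates, topk_idx                  # each token -> k experts and their weights
```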

Hierarchical Mixtures of Experts and the EM Algorithm

Hierarchical Mixtures of Experts and the EM Algorithm
Michael I. Jordan, Robert A. Jacobs
Neural Computation, [Paper]
1993

Mixtures of Experts Architecture

Adaptive Mixtures of Local Experts
Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, Geoffrey E. Hinton
Neural Computation, [Paper]
1991

Sparse Gating Mechanism

Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts
Yunxin Li, Shenyuan Jiang, Baotian Hu, Longyue Wang, Wanqi Zhong, Wenhan Luo, Lin Ma, Min Zhang
arXiv, [Paper]
18 May 2024

Fine-grained and Shared Experts - DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Damai Dai, Chengqi Deng, Chenggang Zhao, R.X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y.K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, Wenfeng Liang
arXiv, [Paper]
11 Jan 2024

Mistral AI - Mixtral of Experts
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed
arXiv, [Paper]
8 Jan 2024

PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts
Yunshui Li, Binyuan Hui, ZhiChao Yin, Min Yang, Fei Huang, Yongbin Li
ACL 2023, [Paper]
13 Jun 2023

Scaling Vision-Language Models with Sparse Mixture of Experts
Sheng Shen, Zhewei Yao, Chunyuan Li, Trevor Darrell, Kurt Keutzer, Yuxiong He
arXiv, [Paper]
13 Mar 2023

Mixture of Attention Heads (MoA) - Mixture of Attention Heads: Selecting Attention Heads Per Token
Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie Zhou, Wenge Rong, Zhang Xiong
EMNLP 2022, [Paper]
11 Oct 2022

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
Basil Mustafa, Carlos Riquelme, Joan Puigcerver, Rodolphe Jenatton, Neil Houlsby
arXiv, [Paper]
6 Jun 2022

Pyramid Design of Experts - DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale
Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He
ICML 2022, [Paper]
21 Jul 2022

k-group Top-1 Routing for Expert Prototyping - M6-T: Exploring Sparse Expert Models and Beyond
An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, Di Zhang, Wei Lin, Lin Qu, Jingren Zhou, Hongxia Yang
arXiv, [Paper]
9 Aug 2021

Scaling Vision with Sparse Mixture of Experts
Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby
arXiv, [Paper]
10 Jun 2021

Routing as a Linear Assignment Problem - BASE Layers: Simplifying Training of Large, Sparse Models
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer
arXiv, [Paper]
30 Mar 2021

Parameter-efficient Fine-tuning

Shared FFN - MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts
Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang
arXiv, [Paper]
20 Jul 2024

FFN - MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
Bin Lin, Zhenyu Tang, Yang Ye, Jiaxi Cui, Bin Zhu, Peng Jin, Jinfa Huang, Junwu Zhang, Yatian Pang, Munan Ning, Li Yuan
arXiv, [Paper] [Codes]
6 Jul 2024

"q_proj", "v_proj" (InstructBLIP) and "up_proj", "down_proj" (LLaVA-1.5) - MoCLE: Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning
Yunhao Gou, Zhili Liu, Kai Chen, Lanqing Hong, Hang Xu, Aoxue Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang
arXiv, [Paper]
4 Jul 2024

All the Layers - Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts
Jialin Wu, Xia Hu, Yaqing Wang, Bo Pang, Radu Soricut
arXiv, [Paper]
2 Apr 2024

FFN - LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin
Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, Shiliang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
arXiv, [Paper]
8 Mar 2024

"q_proj", "p_proj" - MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu
arXiv, [Paper]
20 Feb 2024

FFN - MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning
Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang
arXiv, [Paper]
17 Feb 2024

All the Layers - MoLA: Higher Layers Need More LoRA Experts
Chongyang Gao, Kezhen Chen, Jinmeng Rao, Baochen Sun, Ruibo Liu, Daiyi Peng, Yawen Zhang, Xiaoyuan Guo, Jie Yang, VS Subrahmanian
arXiv, [Paper]
13 Feb 2024

FFN - LLaVA-MoLE: Sparse Mixture of LoRA Experts for Mitigating Data Conflicts in Instruction Finetuning MLLMs
Shaoxiang Chen, Zequn Jie, Lin Ma
arXiv, [Paper]
30 Jan 2024

Attention Projections - SiRA: Sparse Mixture of Low Rank Adaptation
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
arXiv, [Paper]
15 Nov 2023

Mixture of Vectors (MoV) & Mixture of LoRA (MoLORA) - Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning
Ted Zadouri, Ahmet Üstün, Arash Ahmadian, Beyza Ermiş, Acyr Locatelli, Sara Hooker
arXiv, [Paper]
11 Sep 2023

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
EMNLP 2022, [Paper]
2 Nov 2022

Auxiliary Load Balance Loss

Load Balance Loss

Load Balance Loss - MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts
Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang
arXiv, [Paper]
20 Jul 2024

Load Balance Loss - MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
Bin Lin, Zhenyu Tang, Yang Ye, Jiaxi Cui, Bin Zhu, Peng Jin, Jinfa Huang, Junwu Zhang, Yatian Pang, Munan Ning, Li Yuan
arXiv, [Paper] [Codes]
6 Jul 2024

Load Balance Loss and Router z-loss - JetMoE: Reaching Llama2 Performance with 0.1M Dollars
Yikang Shen, Zhen Guo, Tianle Cai, Zengyi Qin
arXiv, [Paper]
11 Apr 2024

Load Balance Loss and Router z-loss - OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You
arXiv, [Paper]
27 Mar 2024

Localized Balancing Constraint Loss - LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin
Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, Shiliang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
arXiv, [Paper]
8 Mar 2024

Load Balance Loss and Contrastive Loss - MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu
arXiv, [Paper]
20 Feb 2024

Load Balance Loss - SiRA: Sparse Mixture of Low Rank Adaptation
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
arXiv, [Paper]
15 Nov 2023

Top-1 Routing and Load Balance Loss & Sparse-Gated Mixture of Experts in Transformer - Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
William Fedus, Barret Zoph, Noam Shazeer
JMLR, [Paper]
16 Jun 2022

Top-2 Routing and Mean Gates Per Experts Loss - GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, Zhifeng Chen
arXiv, [Paper]
30 Jun 2020

Top-k Routing and Importance/Load Balance Losses & Sparse-Gated Mixture of Experts in LSTM - Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean
ICLR 2017, [Paper]
23 Jan 2017
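
Most of the papers above use a variant of the Switch Transformers auxiliary loss, which multiplies the fraction of tokens dispatched to each expert by the router's mean probability for that expert, pushing both toward a uniform distribution. A minimal PyTorch sketch; function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def load_balance_loss(router_logits, expert_index, num_experts):
    """Switch-style auxiliary loss: num_experts * sum_e f_e * P_e, where
    f_e is the fraction of tokens routed to expert e (top-1 assignment)
    and P_e is the mean router probability assigned to expert e."""
    probs = F.softmax(router_logits, dim=-1)                        # [tokens, num_experts]
    f = F.one_hot(expert_index, num_experts).float().mean(dim=0)    # token fraction per expert
    p = probs.mean(dim=0)                                           # mean router prob per expert
    return num_experts * torch.sum(f * p)
```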

z-loss

Load Balance Loss and Router z-loss - JetMoE: Reaching Llama2 Performance with 0.1M Dollars
Yikang Shen, Zhen Guo, Tianle Cai, Zengyi Qin
arXiv, [Paper]
11 Apr 2024

Load Balance Loss and Router z-loss - OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You
arXiv, [Paper]
27 Mar 2024

Router z-loss - ST-MoE: Designing Stable and Transferable Sparse Expert Models
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, William Fedus
arXiv, [Paper]
29 Apr 2022
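
The router z-loss introduced in ST-MoE penalizes large router logits, keeping the gating softmax numerically stable. A minimal PyTorch sketch; names are illustrative.

```python
import torch

def router_z_loss(router_logits):
    """ST-MoE router z-loss: mean squared log-sum-exp of the router logits
    per token, which discourages very large logit magnitudes."""
    z = torch.logsumexp(router_logits, dim=-1)   # [tokens]
    return (z ** 2).mean()
```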

Mutual Information Loss

Mutual Information Loss and Mixture of Attention - Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models
Bowen Pan, Yikang Shen, Haokun Liu, Mayank Mishra, Gaoyuan Zhang, Aude Oliva, Colin Raffel, Rameswar Panda
arXiv, [Paper]
8 Apr 2024

Mutual Information Loss - ModuleFormer: Modularity Emerges from Mixture-of-Experts
Yikang Shen, Zheyu Zhang, Tianyou Cao, Shawn Tan, Zhenfang Chen, Chuang Gan
arXiv, [Paper]
11 Sep 2023

Mutual Information Loss - Mod-Squad: Designing Mixture of Experts As Modular Multi-Task Learners
Zitian Chen, Yikang Shen, Mingyu Ding, Zhenfang Chen, Hengshuang Zhao, Erik Learned-Miller, Chuang Gan
CVPR 2023, [Paper]
15 Dec 2022
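
A rough sketch of the mutual-information idea behind these losses: use all experts evenly across the batch (high marginal entropy) while letting each token concentrate on a few experts (low conditional entropy). This is a simplified illustration, not the exact loss from any paper above.

```python
import torch
import torch.nn.functional as F

def mutual_information_loss(router_logits, eps: float = 1e-9):
    """Simplified MI-style auxiliary loss: maximize H(expert) - H(expert | token)."""
    probs = F.softmax(router_logits, dim=-1)          # [tokens, num_experts]
    marginal = probs.mean(dim=0)                      # p(expert) over the batch
    h_marginal = -(marginal * (marginal + eps).log()).sum()
    h_conditional = -(probs * (probs + eps).log()).sum(dim=-1).mean()
    return -(h_marginal - h_conditional)              # minimize negative MI
```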

Expert Capacity Limit

Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
Yongxin Guo, Zhenglin Cheng, Xiaoying Tang, Tao Lin
arXiv, [Paper]
23 May 2024

Expert Capacity Threshold - GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, Zhifeng Chen
arXiv, [Paper]
30 Jun 2020
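
Expert capacity caps how many tokens each expert may process in a batch; overflow tokens are dropped and carried only by the residual connection. A minimal, sequential PyTorch sketch of a GShard-style capacity check; real implementations vectorize this, and names are illustrative.

```python
import torch

def apply_expert_capacity(expert_index, num_experts, capacity):
    """Keep at most `capacity` tokens per expert; later tokens overflow and
    are marked as dropped (handled by the residual path in the actual layer)."""
    keep = torch.zeros_like(expert_index, dtype=torch.bool)
    counts = [0] * num_experts
    for t, e in enumerate(expert_index.tolist()):   # priority by position in the batch
        if counts[e] < capacity:
            keep[t] = True
            counts[e] += 1
    return keep                                     # [tokens] boolean mask
```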

Non-trainable Gating Mechanism

Random Assignment

For Inference - Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast
Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, Yu Meng
arXiv, [Paper]
23 May 2024

Randomly Allocate 2 Experts - Taming Sparsely Activated Transformer with Stochastic Experts
Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, Jianfeng Gao
ICLR 2022, [Paper]
3 Feb 2022

Hash Routing - Hash Layers For Large Sparse Models
Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston
arXiv, [Paper]
20 Jul 2021
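
Hash routing replaces the trainable gate with a fixed, non-trainable mapping from token id to expert. A minimal sketch using a modulo hash; the paper evaluates several hash functions, and names are illustrative.

```python
def hash_route(token_ids, num_experts):
    """Hash Layers-style routing (sketch): a fixed mapping from token id to expert."""
    return [tid % num_experts for tid in token_ids]
```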

Domain Mapping

DEMix Layers: Disentangling Domains for Modular Language Modeling
Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer
arXiv, [Paper]
20 Aug 2021

Expert-choice Gating

Expert Chooses Tokens - Mixture-of-Experts with Expert Choice Routing
Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, James Laudon
NeurIPS 2022, [Paper]
14 Oct 2022
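
Expert-choice routing inverts the usual direction: each expert selects its top-scoring tokens, so every expert receives the same number of tokens by construction. A minimal PyTorch sketch; names are illustrative.

```python
import torch
import torch.nn.functional as F

def expert_choice_routing(router_logits, tokens_per_expert):
    """Each expert picks its top `tokens_per_expert` tokens from the batch."""
    scores = F.softmax(router_logits, dim=-1)                    # [tokens, num_experts]
    weights, token_idx = scores.topk(tokens_per_expert, dim=0)   # top tokens per expert column
    return weights, token_idx                                    # both [tokens_per_expert, num_experts]
```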

From Dense to Sparse

Sparse Upcycling

Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby
ICLR 2023, [Paper]
17 Feb 2023
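
Sparse upcycling initializes every expert of a new MoE layer from the pretrained dense FFN and then continues training with a freshly initialized router. A minimal PyTorch sketch; the function name is illustrative.

```python
import copy
import torch.nn as nn

def upcycle_ffn_to_moe(dense_ffn: nn.Module, num_experts: int) -> nn.ModuleList:
    """Copy the pretrained dense FFN into every expert of a new MoE layer."""
    return nn.ModuleList([copy.deepcopy(dense_ffn) for _ in range(num_experts)])
```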

Sparse Splitting

Neuron-Independent and Neuron-Sharing - LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
Tong Zhu, Xiaoye Qu, Daize Dong, Jiacheng Ruan, Jingqi Tong, Conghui He, Yu Cheng
arXiv, [Paper]
24 Jun 2024

Evenly Splitting - Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers
Tianlong Chen, Zhenyu Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang
arXiv, [Paper]
2 Mar 2023

Activation Diversity of Different Neurons - MoEfication: Transformer Feed-forward Layers are Mixtures of Experts
Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
ACL Findings 2022, [Paper]
5 Apr 2022

Dense Gating Mechanism

All the experts are activated.
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu
arXiv, [Paper]
20 Feb 2024

Soft Gating Mechanism

Dense Gating Mechanism + Gating-weighted Merging of Input Tokens or Experts

Token Merging

From Sparse to Soft Mixtures of Experts
Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Neil Houlsby
ICLR 2024, [Paper]
27 May 2024
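
In Soft MoE, each expert slot receives a weighted average of all input tokens (dispatch) and each output token is a weighted average of all slot outputs (combine), so routing stays fully differentiable. A minimal single-layer PyTorch sketch assuming one slot per expert; names are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_moe_layer(x, slot_embeds, experts):
    """x: [tokens, d_model]; slot_embeds: [d_model, num_slots];
    experts: list of per-slot modules (one slot per expert here)."""
    logits = x @ slot_embeds                        # [tokens, slots]
    dispatch = F.softmax(logits, dim=0)             # normalize over tokens for each slot
    combine = F.softmax(logits, dim=1)              # normalize over slots for each token
    slots_in = dispatch.t() @ x                     # [slots, d_model] merged token inputs
    slots_out = torch.stack([experts[s](slots_in[s]) for s in range(slots_in.shape[0])])
    return combine @ slots_out                      # [tokens, d_model] merged expert outputs
```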

Expert Merging

Soft Merging of Experts with Adaptive Routing
Mohammed Muqeeth, Haokun Liu, Colin Raffel
TMLR, [Paper]
13 May 2024
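
Soft expert merging (as in SMEAR) averages the experts' parameters with the router probabilities and applies one merged expert, instead of routing inputs to discrete experts. A minimal PyTorch sketch for linear experts; names are illustrative.

```python
import torch

def merge_experts(expert_weights, router_probs):
    """expert_weights: [num_experts, out_dim, in_dim]; router_probs: [num_experts].
    Returns the probability-weighted average of the expert weight matrices,
    used as the weight of a single merged linear expert."""
    return torch.einsum('e,eoi->oi', router_probs, expert_weights)
```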

Acknowledgement

This project is sponsored by the PodGPT group, Kolachalama Laboratory at Boston University.