Configure one file for model heterogeneity. Consistent GPU memory usage for single or multiple clients.


TsingZ0/HtFLlib


HtFLlib: Heterogeneous Federated Learning Library

Standard federated learning, e.g., FedAvg, assumes that all participating clients train their local models with the same architecture, which limits its utility in real-world scenarios. In practice, clients often build models with heterogeneous architectures for their specific local tasks. Heterogeneous Federated Learning (HtFL) emerges to address data heterogeneity, model heterogeneity, communication overhead, and intellectual property (IP) protection simultaneously.

  • 9 data-free HtFL algorithms and 21 heterogeneous model architectures.
  • PFLlib compatible.
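To see why plain FedAvg cannot handle model heterogeneity, consider that element-wise weight averaging requires identical parameter shapes on every client. A minimal sketch (the client names and parameter vectors below are hypothetical, not HtFLlib code):

```python
# Hypothetical per-client parameter vectors with different sizes,
# standing in for heterogeneous model architectures.
client_params = {
    "client_0": [0.1] * 8,   # e.g., a small CNN head (8 weights)
    "client_1": [0.2] * 16,  # e.g., a larger backbone head (16 weights)
}

def fedavg(param_lists):
    """Element-wise average; only valid when all models share one shape."""
    sizes = {len(p) for p in param_lists}
    if len(sizes) > 1:
        raise ValueError("FedAvg needs identical model shapes")
    n = len(param_lists)
    return [sum(vals) / n for vals in zip(*param_lists)]

try:
    fedavg(list(client_params.values()))
except ValueError as e:
    print(e)  # heterogeneous shapes cannot be averaged directly
```

HtFL algorithms therefore exchange other carriers of knowledge (e.g., logits or prototypes) instead of raw weights.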

Environments

Install CUDA v11.6.

Install the latest version of Conda and activate it.

conda env create -f env_cuda_latest.yaml # You may need to downgrade torch via pip to match your CUDA version

Scenarios and datasets

As an example, we only show the MNIST dataset in the label-skew scenario generated via a Dirichlet distribution. Please refer to my other repository, PFLlib, for more help.
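Label skew via a Dirichlet distribution is commonly implemented as follows (a generic sketch, not necessarily PFLlib's exact code): for each class, draw client proportions from Dirichlet(alpha) and split that class's samples accordingly; a smaller alpha yields more skew.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    """Split sample indices among clients with per-class Dirichlet proportions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # cumulative proportions -> split points within this class
        splits = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.array([i % 10 for i in range(1000)])  # toy 10-class labels
parts = dirichlet_partition(labels, num_clients=20, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)
```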

You can also modify the code in PFLlib to support model heterogeneity scenarios, but this requires considerable effort. In this repository, you only need to configure system/main.py to support model heterogeneity.
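The essential idea of that single-file configuration is a central mapping from each client to one of several architectures. The snippet below is purely illustrative; the model names and the cyclic assignment are hypothetical, and system/main.py uses its own argument names:

```python
# Hypothetical architecture pool; HtFLlib ships its own model zoo.
model_pool = ["cnn_small", "cnn_large", "resnet_tiny", "mobilenet_tiny"]
num_clients = 10

# Assign architectures cyclically so each client gets one model type.
client_models = {cid: model_pool[cid % len(model_pool)] for cid in range(num_clients)}
print(client_models[0], client_models[5])
```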

Note: you may need to manually clean the checkpoint files in the temp/ folder via system/clean_temp_files.py if your program crashes accidentally. You can also set your own checkpoint folder with the -sfn command-line argument to prevent automatic deletion.
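For intuition, a cleanup helper of this kind might look like the following sketch (an assumption for illustration only, not the actual contents of system/clean_temp_files.py):

```python
import os
import tempfile

def clean_temp_files(folder):
    """Delete leftover files (e.g., stale checkpoints) in a folder; return count removed."""
    removed = 0
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            os.remove(path)
            removed += 1
    return removed

# Demo on a throwaway directory, standing in for temp/.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "client_0.pt"), "w").close()
print(clean_temp_files(demo))  # → 1
```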

Data-free algorithms with code (updating)

Here, "data-free" means that no additional dataset is required beyond the clients' private data. We only consider data-free algorithms here, as they make fewer assumptions and impose fewer restrictions, which makes them more valuable and more easily extendable to other scenarios, such as when public server data is available.
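One data-free knowledge carrier used by prototype-based HtFL methods (in the spirit of FedProto-style approaches; this is a sketch, not HtFLlib's implementation): clients share per-class mean feature vectors ("prototypes") instead of weights, so heterogeneous backbones can still collaborate.

```python
import numpy as np

def local_prototypes(features, labels):
    """Per-class mean feature vector computed on one client's private data."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def aggregate(protos_per_client):
    """Server-side averaging of prototypes, class by class."""
    merged = {}
    for protos in protos_per_client:
        for c, p in protos.items():
            merged.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in merged.items()}

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))   # toy 16-dim features
labels = np.arange(100) % 5          # 5 classes, all present
global_protos = aggregate([local_prototypes(feats, labels)])
print(len(global_protos))  # → 5
```

Because only fixed-size feature vectors travel to the server, communication cost is independent of each client's model size.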

Experimental results

You can run total.sh with pre-tuned hyperparameters to obtain some results, for example:

cd ./system
sh total.sh

Alternatively, you can find some results in our accepted FL papers (i.e., FedTGP and FedKTL). Please note that this actively developed project may not reproduce the results in those papers, since some basic settings may change in response to community requests.
