
Federated Learning

What is Federated Learning?

Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralised data. Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well.

The goal is to train a high-quality centralised model in a setting where the training data is distributed over a large number of clients, each with an unreliable and relatively slow network connection.

How does the application work?

With the rise of popular libraries such as PySyft and TensorFlow Federated, it has become easier for developers, researchers and machine learning enthusiasts to train machine learning models in a decentralised fashion.

In this project the aim is to predict housing prices for properties listed in the city of Boston. To train on this dataset I have used PySyft, a Python library for secure, private Deep Learning. PySyft decouples private data from model training, using Multi-Party Computation (MPC) within PyTorch.
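As a rough illustration of that setup, the sketch below creates two simulated remote workers and federates a placeholder housing dataset across them. It assumes the PySyft 0.2.x style API (sy.TorchHook, sy.VirtualWorker, sy.BaseDataset, sy.FederatedDataLoader); the random tensors simply stand in for the Boston housing features and prices.

    # Sketch only: distribute the training data across two simulated workers.
    import torch
    import syft as sy

    hook = sy.TorchHook(torch)                  # add .send()/.get() to torch tensors
    alice = sy.VirtualWorker(hook, id="alice")  # simulated remote device
    bob = sy.VirtualWorker(hook, id="bob")      # simulated remote device

    # Placeholder tensors standing in for the 13 Boston housing features and prices
    features = torch.randn(506, 13)
    prices = torch.randn(506, 1)

    # Split the dataset between Alice and Bob; each batch stays on its owning worker
    federated_dataset = sy.BaseDataset(features, prices).federate((alice, bob))
    federated_loader = sy.FederatedDataLoader(federated_dataset, batch_size=32, shuffle=True)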

While training the Deep Learning prediction model, the data stays securely stored locally with the two parties, Alice and Bob. For private training, I used a Federated approach in which the ML model is trained locally using the on-device capability of the devices owned by Alice and Bob. With the rising compute performance of mobile devices, it has become much more efficient to train ML models directly on the device.
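The loop below sketches what that local training can look like with the same assumed PySyft API: the model is sent to whichever worker holds the current batch, a gradient step is taken there, and the updated weights are pulled back. The small regression network and hyperparameters are illustrative, not the repository's exact configuration.

    # Sketch only: train where the data lives, then retrieve the updated model.
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Sequential(nn.Linear(13, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        for data, target in federated_loader:   # each batch lives on Alice or Bob
            model.send(data.location)           # move the model to the data, not the data to us
            optimizer.zero_grad()
            loss = loss_fn(model(data), target)
            loss.backward()
            optimizer.step()
            model.get()                         # bring the updated weights back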

Steps involved in the Federated Learning approach

  1. The mobile devices download the current global ML model
  2. Data is generated while the user interacts with the application linked to the ML model
  3. As the user interacts with the application more, the predictions become better tailored to their usage
  4. Once the model is ready for the scheduled sync with the server, the personalised model that was trained on-device is sent to the server
  5. Models from all the devices are collected, and a Federated averaging function (sketched below) is used to generate an improved version of the global model
  6. Once trained, the improved version is sent back to all the devices, so every user benefits from the usage of devices around the globe
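Step 5 is the server-side aggregation. The sketch below is plain PyTorch (no PySyft calls) and assumes a hypothetical list client_models holding the models collected during the sync; it simply averages their parameters into a new global model.

    # Sketch only: average the collected client models into a new global model.
    import copy
    import torch

    def federated_average(client_models):
        global_model = copy.deepcopy(client_models[0])
        global_state = global_model.state_dict()
        for key in global_state:
            # Stack the corresponding parameter from every client and take the mean
            global_state[key] = torch.stack(
                [client.state_dict()[key].float() for client in client_models]
            ).mean(dim=0)
        global_model.load_state_dict(global_state)
        return global_model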

Resources

Frameworks

  • PyTorch
  • PySyft
