
fixed federated learning example #64

Merged
merged 1 commit into from
Jun 27, 2018
Merged

Conversation

wilko77
Collaborator

@wilko77 wilko77 commented Jun 27, 2018

For gradient descent in a federated setting to work, it is essential that all parties start with the same model.
In our example we first let the clients learn a model on their respective data, and then we ran federated learning on top of these individually learned, different models. So sad.

Here I propose to separate the local learning more cleanly from the federated learning part. In both cases, all clients will start with an initial model of zeros.

The output now is:

Loading data
Error (MSE) that each client gets on test set by training only on own local data:
Hospital 1:	3921.78
Hospital 2:	3808.58
Hospital 3:	4019.43
Running distributed gradient aggregation for 50 iterations
Error (MSE) that each client gets after running the protocol:
Hospital 1:	3644.47
Hospital 2:	3644.47
Hospital 3:	3644.47
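The idea behind the fix can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual code: the data, client count, learning rate, and iteration count are made up for the example. Three "hospitals" hold private data, every client starts from the same all-zeros linear model, and each round the locally computed gradients are averaged and applied to the shared model, so all clients end up with identical weights.

```python
import numpy as np

# Synthetic private datasets for three clients (stand-ins for the hospitals).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

def gradient(w, X, y):
    # Gradient of the MSE loss (1/n) * ||Xw - y||^2 with respect to w.
    return 2.0 / len(y) * X.T @ (X @ w - y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Every client starts from the SAME initial model (zeros); averaging
# gradients taken at different, individually trained models is meaningless.
w = np.zeros(3)
lr = 0.1
for _ in range(50):
    grads = [gradient(w, X, y) for X, y in clients]
    w -= lr * np.mean(grads, axis=0)  # aggregate the gradients, then step

# All clients share the final model, so they report the same error shape
# as in the output above (identical MSE across hospitals).
print([round(mse(w, X, y), 4) for X, y in clients])
```

Because there is only one shared model, the per-client errors after the protocol are necessarily equal, which matches the identical `3644.47` values in the output above.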

federated setting, otherwise there is no guarantee that gradient descent
will converge to something useful...
@wilko77 wilko77 requested a review from hardbyte June 27, 2018 03:48
@wilko77 wilko77 merged commit 6e99be0 into master Jun 27, 2018
@wilko77 wilko77 deleted the fix-federated_learning_example branch June 27, 2018 05:04