Rename examples using dash instead of underscore #2138

Merged · 5 commits · Jul 19, 2023
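The rename itself is mechanical. A minimal sketch of how such a migration could be scripted (hypothetical; the PR's commits may have been prepared differently), run from the repository root:

```python
from pathlib import Path

EXAMPLES = Path("examples")

# Map old (underscore) example directory names to new (dash) names.
renames = {
    p.name: p.name.replace("_", "-")
    for p in EXAMPLES.iterdir()
    if p.is_dir() and "_" in p.name
}

# Rename the directories themselves.
for old, new in renames.items():
    (EXAMPLES / old).rename(EXAMPLES / new)

# Rewrite references in the README and the docs. Only whole path segments
# such as "examples/quickstart_pytorch" are replaced, so Python module
# paths (e.g. flwr_example.quickstart_pytorch) and blog-post slugs are
# left untouched.
for doc in [Path("README.md"), *Path("doc/source").rglob("*.rst")]:
    text = doc.read_text()
    for old, new in renames.items():
        text = text.replace(f"examples/{old}", f"examples/{new}")
    doc.write_text(text)
```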
26 changes: 13 additions & 13 deletions README.md
@@ -113,25 +113,25 @@ Several code examples show different usage scenarios of Flower (in combination w

Quickstart examples:

- * [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart_tensorflow)
- * [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart_pytorch)
- * [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart_huggingface)
- * [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart_pytorch_lightning)
- * [Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart_fastai)
- * [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart_pandas)
- * [Quickstart (MXNet)](https://github.com/adap/flower/tree/main/examples/quickstart_mxnet)
- * [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart_jax)
+ * [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow)
+ * [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch)
+ * [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface)
+ * [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning)
+ * [Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart-fastai)
+ * [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart-pandas)
+ * [Quickstart (MXNet)](https://github.com/adap/flower/tree/main/examples/quickstart-mxnet)
+ * [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax)
* [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist)
* [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android)
* [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios)

Other [examples](https://github.com/adap/flower/tree/main/examples):

- * [Raspberry Pi &amp; Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded_devices)
- * [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch_from_centralized_to_federated)
- * [MXNet: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/mxnet_from_centralized_to_federated)
- * [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced_tensorflow)
- * [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced_pytorch)
+ * [Raspberry Pi &amp; Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded-devices)
+ * [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated)
+ * [MXNet: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/mxnet-from-centralized-to-federated)
+ * [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow)
+ * [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)
* Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation_pytorch)) ([Tensorflow](https://github.com/adap/flower/tree/main/examples/simulation_tensorflow))

## Community
2 changes: 1 addition & 1 deletion doc/source/evaluation.rst
@@ -174,4 +174,4 @@ Model parameters can also be evaluated during training. :code:`Client.fit` can r
Full Code Example
-----------------

- For a full code example that uses both centralized and federated evaluation, see the *Advanced TensorFlow Example* (the same approach can be applied to workloads implemented in any other framework): https://github.com/adap/flower/tree/main/examples/advanced_tensorflow
+ For a full code example that uses both centralized and federated evaluation, see the *Advanced TensorFlow Example* (the same approach can be applied to workloads implemented in any other framework): https://github.com/adap/flower/tree/main/examples/advanced-tensorflow
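A hedged sketch of the centralized half, assuming the flwr 1.x strategy API in which :code:`FedAvg` accepts an :code:`evaluate_fn` (the model and test data here are illustrative Keras placeholders):

.. code-block:: python

    import flwr as fl

    def get_evaluate_fn(model, x_test, y_test):
        """Return a callback that evaluates the global model on the server."""

        def evaluate(server_round, parameters, config):
            model.set_weights(parameters)  # update the model with the latest global weights
            loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
            return loss, {"accuracy": accuracy}

        return evaluate

    strategy = fl.server.strategy.FedAvg(
        evaluate_fn=get_evaluate_fn(model, x_test, y_test),
    )

Federated (client-side) evaluation needs no extra wiring beyond this: whatever :code:`Client.evaluate` returns is aggregated by the strategy.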
4 changes: 2 additions & 2 deletions doc/source/example-jax-from-centralized-to-federated.rst
@@ -3,7 +3,7 @@ Example: JAX - Run JAX Federated

This tutorial will show you how to use Flower to build a federated version of an existing JAX workload.
We are using JAX to train a linear regression model on a scikit-learn dataset.
- We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch_from_centralized_to_federated>`_ walkthrough.
+ We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch-from-centralized-to-federated>`_ walkthrough.
First, we build a centralized training approach based on the `Linear Regression with JAX <https://coax.readthedocs.io/en/latest/examples/linear_regression/jax.html>`_ tutorial.
Then, we build upon the centralized training code to run the training in a federated fashion.
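For orientation, a minimal centralized sketch in the spirit of that tutorial (the model, dataset sizes, and learning rate are illustrative, not the tutorial's exact values):

.. code-block:: python

    import jax
    import jax.numpy as jnp
    from sklearn.datasets import make_regression

    X, y = make_regression(n_samples=100, n_features=3, random_state=0)
    params = {"w": jnp.zeros(X.shape[1]), "b": 0.0}

    def loss_fn(params, X, y):
        preds = jnp.dot(X, params["w"]) + params["b"]
        return jnp.mean((preds - y) ** 2)  # mean squared error

    grad_fn = jax.grad(loss_fn)

    for _ in range(50):  # plain full-batch gradient descent
        grads = grad_fn(params, X, y)
        params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

Federating this amounts to moving the update loop into :code:`Client.fit` and exchanging :code:`params` with the server.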

@@ -276,7 +276,7 @@ in each window (make sure that the server is still running before you do so) and
Next Steps
----------

- The source code of this example was improved over time and can be found here: `Quickstart JAX <https://github.com/adap/flower/blob/main/examples/quickstart_jax>`_.
+ The source code of this example was improved over time and can be found here: `Quickstart JAX <https://github.com/adap/flower/blob/main/examples/quickstart-jax>`_.
Our example is somewhat over-simplified because both clients load the same dataset.

You're now prepared to explore this topic further. How about using a more sophisticated model or using a different dataset? How about adding more clients?
4 changes: 2 additions & 2 deletions doc/source/example-mxnet-walk-through.rst
@@ -3,7 +3,7 @@ Example: MXNet - Run MXNet Federated

This tutorial will show you how to use Flower to build a federated version of an existing MXNet workload.
We are using MXNet to train a Sequential model on the MNIST dataset.
- We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch_from_centralized_to_federated>`_ walkthrough. MXNet and PyTorch are very similar and a very good comparison between MXNet and PyTorch is given `here <https://mxnet.apache.org/versions/1.7.0/api/python/docs/tutorials/getting-started/to-mxnet/pytorch.html>`_.
+ We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch-from-centralized-to-federated>`_ walkthrough. MXNet and PyTorch are very similar and a very good comparison between MXNet and PyTorch is given `here <https://mxnet.apache.org/versions/1.7.0/api/python/docs/tutorials/getting-started/to-mxnet/pytorch.html>`_.
First, we build a centralized training approach based on the `Handwritten Digit Recognition <https://mxnet.apache.org/versions/1.7.0/api/python/docs/tutorials/packages/gluon/image/mnist.html>`_ tutorial.
Then, we build upon the centralized training code to run the training in a federated fashion.
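For orientation, a sketch of a comparable Gluon :code:`Sequential` model (layer sizes are illustrative, not necessarily the tutorial's):

.. code-block:: python

    from mxnet import init
    from mxnet.gluon import nn

    def build_model():
        net = nn.Sequential()
        net.add(
            nn.Dense(256, activation="relu"),
            nn.Dense(64, activation="relu"),
            nn.Dense(10),  # one logit per MNIST digit class
        )
        net.initialize(init.Normal(sigma=0.01))
        return net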

@@ -355,6 +355,6 @@ in each window (make sure that the server is still running before you do so) and
Next Steps
----------

- The full source code for this example: `MXNet: From Centralized To Federated (Code) <https://github.com/adap/flower/blob/main/examples/mxnet_from_centralized_to_federated>`_.
+ The full source code for this example: `MXNet: From Centralized To Federated (Code) <https://github.com/adap/flower/blob/main/examples/mxnet-from-centralized-to-federated>`_.
Our example is of course somewhat over-simplified because both clients load the exact same dataset, which isn't realistic.
You're now prepared to explore this topic further. How about using a CNN or using a different dataset? How about adding more clients?
@@ -309,6 +309,6 @@ in each window (make sure that the server is running before you do so) and see y
Next Steps
----------

- The full source code for this example: `PyTorch: From Centralized To Federated (Code) <https://github.com/adap/flower/blob/main/examples/pytorch_from_centralized_to_federated>`_.
+ The full source code for this example: `PyTorch: From Centralized To Federated (Code) <https://github.com/adap/flower/blob/main/examples/pytorch-from-centralized-to-federated>`_.
Our example is, of course, somewhat over-simplified because both clients load the exact same dataset, which isn't realistic.
You're now prepared to explore this topic further. How about using different subsets of CIFAR-10 on each client? How about adding more clients?
12 changes: 6 additions & 6 deletions doc/source/example-walkthrough-pytorch-mnist.rst
@@ -72,7 +72,7 @@ Inside the server helper script *run-server.sh* you will find the following code

.. code-block:: bash

- python -m flwr_example.quickstart_pytorch.server
+ python -m flwr_example.quickstart-pytorch.server


We can go a bit deeper and see that :code:`server.py` simply launches a server that will coordinate three rounds of training.
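The gist of that script, sketched against the current flwr 1.x API (the legacy :code:`flwr_example` package predates :code:`ServerConfig`, so the original file differs in detail):

.. code-block:: python

    import flwr as fl

    if __name__ == "__main__":
        # Start a Flower server and coordinate three rounds of federated training
        fl.server.start_server(
            server_address="0.0.0.0:8080",
            config=fl.server.ServerConfig(num_rounds=3),
        )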
@@ -92,7 +92,7 @@ Next, let's take a look at the *run-clients.sh* file. You will see that it conta

.. code-block:: bash

- python -m flwr_example.quickstart_pytorch.client \
+ python -m flwr_example.quickstart-pytorch.client \
--cid=$i \
--server_address=$SERVER_ADDRESS \
--nb_clients=$NUM_CLIENTS
@@ -101,7 +101,7 @@ Next, let's take a look at the *run-clients.sh* file. You will see that it conta
* **server_address**: String that identifies the IP and port of the server.
* **nb_clients**: This defines the number of clients being created. This piece of information is not required by the client, but it helps us partition the original MNIST dataset to make sure that every client is working on unique subsets of both *training* and *test* sets (a hypothetical partitioning sketch follows below).
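A hypothetical sketch of such a partitioning helper (the real :code:`mnist.load_data` may differ; the interleaved index split is one simple way to keep shards disjoint):

.. code-block:: python

    from torch.utils.data import DataLoader, Subset
    from torchvision import datasets, transforms

    def load_data(cid: int, nb_clients: int, batch_size: int = 32):
        """Give client `cid` a disjoint shard of the MNIST train and test sets."""
        tfm = transforms.ToTensor()
        train = datasets.MNIST("./data", train=True, download=True, transform=tfm)
        test = datasets.MNIST("./data", train=False, download=True, transform=tfm)

        def shard(dataset, shuffle):
            indices = list(range(cid, len(dataset), nb_clients))  # interleaved, disjoint per client
            return DataLoader(Subset(dataset, indices), batch_size=batch_size, shuffle=shuffle)

        return shard(train, shuffle=True), shard(test, shuffle=False)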

Again, we can go deeper and look inside :code:`flwr_example/quickstart_pytorch/client.py`.
Again, we can go deeper and look inside :code:`flwr_example/quickstart-pytorch/client.py`.
After going through the argument parsing code at the beginning of our :code:`main` function, you will find a call to :code:`mnist.load_data`. This function is responsible for partitioning the original MNIST datasets (*training* and *test*) and returning a :code:`torch.utils.data.DataLoader` s for each of them.
We then instantiate a :code:`PytorchMNISTClient` object with our client ID, our DataLoaders, the number of epochs in each round, and which device we want to use for training (CPU or GPU).

@@ -122,7 +122,7 @@ The :code:`PytorchMNISTClient` object when finally passed to :code:`fl.client.st
A Closer Look
-------------

Now, let's look closely into the :code:`PytorchMNISTClient` inside :code:`flwr_example.quickstart_pytorch.mnist` and see what it is doing:
Now, let's look closely into the :code:`PytorchMNISTClient` inside :code:`flwr_example.quickstart-pytorch.mnist` and see what it is doing:

.. code-block:: python

@@ -241,7 +241,7 @@ The first thing to notice is that :code:`PytorchMNISTClient` instantiates a CNN
self.model = MNISTNet().to(device)
...

The code for the CNN is available under :code:`quickstart_pytorch.mnist` and it is reproduced below. It is the same network found in `Basic MNIST Example <https://github.com/pytorch/examples/tree/master/mnist>`_.
The code for the CNN is available under :code:`quickstart-pytorch.mnist` and it is reproduced below. It is the same network found in `Basic MNIST Example <https://github.com/pytorch/examples/tree/master/mnist>`_.

.. code-block:: python

@@ -314,7 +314,7 @@ The second thing to notice is that :code:`PytorchMNISTClient` class inherits fro

When comparing the abstract class to its derived class :code:`PytorchMNISTClient`, you will notice that :code:`fit` calls a :code:`train` function and that :code:`evaluate` calls a :code:`test` function.

These functions can both be found inside the same :code:`quickstart_pytorch.mnist` module:
These functions can both be found inside the same :code:`quickstart-pytorch.mnist` module:

.. code-block:: python

10 changes: 5 additions & 5 deletions doc/source/examples.rst
@@ -22,7 +22,7 @@ Quickstart TensorFlow/Keras
The TensorFlow/Keras quickstart example shows CIFAR-10 image classification
with MobileNetV2:

- - `Quickstart TensorFlow (Code) <https://github.com/adap/flower/tree/main/examples/quickstart_tensorflow>`_
+ - `Quickstart TensorFlow (Code) <https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow>`_
- `Quickstart TensorFlow (Tutorial) <https://flower.dev/docs/quickstart-tensorflow.html>`_
- `Quickstart TensorFlow (Blog Post) <https://flower.dev/blog/2020-12-11-federated-learning-in-less-than-20-lines-of-code>`_

@@ -33,7 +33,7 @@ Quickstart PyTorch
The PyTorch quickstart example shows CIFAR-10 image classification
with a simple Convolutional Neural Network:

- - `Quickstart PyTorch (Code) <https://github.com/adap/flower/tree/main/examples/quickstart_pytorch>`_
+ - `Quickstart PyTorch (Code) <https://github.com/adap/flower/tree/main/examples/quickstart-pytorch>`_
- `Quickstart PyTorch (Tutorial) <https://flower.dev/docs/quickstart-pytorch.html>`_


@@ -42,7 +42,7 @@ PyTorch: From Centralized To Federated

This example shows how a regular PyTorch project can be federated using Flower:

- - `PyTorch: From Centralized To Federated (Code) <https://github.com/adap/flower/tree/main/examples/pytorch_from_centralized_to_federated>`_
+ - `PyTorch: From Centralized To Federated (Code) <https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated>`_
- `PyTorch: From Centralized To Federated (Tutorial) <https://flower.dev/docs/example-pytorch-from-centralized-to-federated.html>`_


@@ -51,8 +51,8 @@ Federated Learning on Raspberry Pi and Nvidia Jetson

This example shows how Flower can be used to build a federated learning system that runs across Raspberry Pi and Nvidia Jetson:

- - `Federated Learning on Raspberry Pi and Nvidia Jetson (Code) <https://github.com/adap/flower/tree/main/examples/embedded_devices>`_
- - `Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) <https://flower.dev/blog/2020-12-16-running_federated_learning_applications_on_embedded_devices_with_flower>`_
+ - `Federated Learning on Raspberry Pi and Nvidia Jetson (Code) <https://github.com/adap/flower/tree/main/examples/embedded-devices>`_
+ - `Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) <https://flower.dev/blog/2020-12-16-running_federated_learning_applications_on_embedded_devices_with_flower>`_



2 changes: 1 addition & 1 deletion doc/source/faq.rst
@@ -12,7 +12,7 @@ This page collects answers to commonly asked questions about Federated Learning

.. dropdown:: :fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?

- Find the `blog post about federated learning on embedded device here <https://flower.dev/blog/2020-12-16-running_federated_learning_applications_on_embedded_devices_with_flower>`_ and the corresponding `GitHub code example <https://github.com/adap/flower/tree/main/examples/embedded_devices>`_.
+ Find the `blog post about federated learning on embedded device here <https://flower.dev/blog/2020-12-16-running_federated_learning_applications_on_embedded_devices_with_flower>`_ and the corresponding `GitHub code example <https://github.com/adap/flower/tree/main/examples/embedded-devices>`_.

.. dropdown:: :fa:`eye,mr-1` Does Flower support federated learning on Android devices?

@@ -93,6 +93,6 @@ in each window (make sure that the server is still running before you do so) and
Next Steps
----------

- The full source code for this example can be found `here <https://github.com/adap/flower/blob/main/examples/pytorch_from_centralized_to_federated>`_.
+ The full source code for this example can be found `here <https://github.com/adap/flower/blob/main/examples/pytorch-from-centralized-to-federated>`_.
Our example is of course somewhat over-simplified because both clients load the exact same dataset, which isn't realistic.
You're now prepared to explore this topic further. How about using different subsets of CIFAR-10 on each client? How about adding more clients?
2 changes: 1 addition & 1 deletion doc/source/quickstart-fastai.rst
@@ -6,4 +6,4 @@ Quickstart fastai

Let's build a federated learning system using fastai and Flower!

- Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart_fastai>`_ to learn more.
+ Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart-fastai>`_ to learn more.
2 changes: 1 addition & 1 deletion doc/source/quickstart-huggingface.rst
@@ -219,7 +219,7 @@ And they will be able to connect to the server and start the federated training.

To see everything put together, check out the full code example:
- [https://github.com/adap/flower/tree/main/examples/quickstart_huggingface](https://github.com/adap/flower/tree/main/examples/quickstart_huggingface).
+ [https://github.com/adap/flower/tree/main/examples/quickstart-huggingface](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface).

Of course, this is a very basic example, and a lot can be added or modified; it was just meant to showcase how simply we can federate a Hugging Face workflow using Flower.
4 changes: 2 additions & 2 deletions doc/source/quickstart-jax.rst
@@ -6,7 +6,7 @@ Quickstart JAX

This tutorial will show you how to use Flower to build a federated version of an existing JAX workload.
We are using JAX to train a linear regression model on a scikit-learn dataset.
- We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch_from_centralized_to_federated>`_ walkthrough.
+ We will structure the example similar to our `PyTorch - From Centralized To Federated <https://github.com/adap/flower/blob/main/examples/pytorch-from-centralized-to-federated>`_ walkthrough.
First, we build a centralized training approach based on the `Linear Regression with JAX <https://coax.readthedocs.io/en/latest/examples/linear_regression/jax.html>`_ tutorial.
Then, we build upon the centralized training code to run the training in a federated fashion.

@@ -279,7 +279,7 @@ in each window (make sure that the server is still running before you do so) and
Next Steps
----------

- The source code of this example was improved over time and can be found here: `Quickstart JAX <https://github.com/adap/flower/blob/main/examples/quickstart_jax>`_.
+ The source code of this example was improved over time and can be found here: `Quickstart JAX <https://github.com/adap/flower/blob/main/examples/quickstart-jax>`_.
Our example is somewhat over-simplified because both clients load the same dataset.

You're now prepared to explore this topic further. How about using a more sophisticated model or using a different dataset? How about adding more clients?
2 changes: 1 addition & 1 deletion doc/source/quickstart-mxnet.rst
@@ -288,4 +288,4 @@ You should now see how the training does in the very first terminal (the one tha

Congratulations!
You've successfully built and run your first federated learning system.
- The full `source code <https://github.com/adap/flower/blob/main/examples/quickstart_mxnet/client.py>`_ for this example can be found in :code:`examples/quickstart_mxnet`.
+ The full `source code <https://github.com/adap/flower/blob/main/examples/quickstart-mxnet/client.py>`_ for this example can be found in :code:`examples/quickstart-mxnet`.
2 changes: 1 addition & 1 deletion doc/source/quickstart-pandas.rst
@@ -6,4 +6,4 @@ Quickstart Pandas

Let's build a federated analytics system using Pandas and Flower!

- Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart_pandas>`_ to learn more.
+ Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart-pandas>`_ to learn more.
2 changes: 1 addition & 1 deletion doc/source/quickstart-pytorch-lightning.rst
@@ -6,4 +6,4 @@ Quickstart PyTorch Lightning

Let's build a federated learning system using PyTorch Lightning and Flower!

- Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart_pytorch_lightning>`_ to learn more.
+ Please refer to the `full code example <https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning>`_ to learn more.
2 changes: 1 addition & 1 deletion doc/source/quickstart-pytorch.rst
@@ -264,4 +264,4 @@ You should now see how the training does in the very first terminal (the one tha

Congratulations!
You've successfully built and run your first federated learning system.
- The full `source code <https://github.com/adap/flower/blob/main/examples/quickstart_pytorch/client.py>`_ for this example can be found in :code:`examples/quickstart_pytorch`.
+ The full `source code <https://github.com/adap/flower/blob/main/examples/quickstart-pytorch/client.py>`_ for this example can be found in :code:`examples/quickstart-pytorch`.