Update English basic usage document #920

Closed

Conversation

zhouxiao-coder (Contributor)

Update English version of the "Basic Usage" document. #911

Changes are made according to the advice in #620. Since @tianbingsz has previously updated the Chinese version, I have also tried to keep this document consistent with it.

List of changes:

  • Corrected grammar mistakes;
  • Changed equations to follow reStructuredText ("rst") syntax;
  • Rewrote some sentences according to suggestions in English Document Structure #620.

@luotao1

@zhouxiao-coder (Contributor, Author)

@tianbingsz I followed many of your changes in the corresponding Chinese version, with one exception: I didn't add a variable ε to the model. Here is my concern; please let me know whether you think this is reasonable.

In short, I think it is too complicated for a simple introductory document like this, and we can safely assume ε = 0.

  • To my understanding, ε is noise introduced by the observation procedure. If we assume it exists, then we also need to change the data-generation code to reflect this assumption, which seems unnecessary.
  • Although ε is most often assumed to follow a Gaussian distribution, that is not necessarily true. Imagine you have a weight scale that always adds 2 extra pounds …

-----------------

Suppose the true relationship can be characterized as ``Y = 2X + 0.3``; let's see how to recover this pattern from observed data alone. Here is a piece of Python code that feeds synthetic data to PaddlePaddle. The code is pretty self-explanatory; the only extra thing you need to add for PaddlePaddle is a definition of input data types.
A PaddlePaddle job usually loads the training data by implementing a Python data provider. A data provider is a Python function which is called by the PaddlePaddel trainer program, so it can adapt to any data format. We can write a data provider that reads from a local file system, HDFS, databases, S3, or almost anywhere. In this example, our data provider synthesizes the training data by sampling from the line :math:`Y = 2X + 0.3`.
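The sampling idea can be sketched as a plain Python generator. This is illustrative only, not PaddlePaddle's actual data-provider API; `synthetic_samples` is a hypothetical name, and ε = 0 is assumed as discussed above.

```python
import random

def synthetic_samples(n, w=2.0, b=0.3):
    """Yield n (x, y) pairs sampled from the line y = w*x + b.

    No noise term is added (epsilon = 0, per the discussion above).
    """
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        yield x, w * x + b
```

A real data provider would stream such pairs to the trainer instead of a file.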
Contributor:

"which is called by PaddlePaddel": typo (笔误); should be "PaddlePaddle".

@tianbingsz (Contributor), Dec 19, 2016:

In this example, our data provider generates the training data by sampling from the line :math:`Y = 2X + 0.3`.


Problem Background
------------------

Now, to give you a hint of what using PaddlePaddle looks like, let's start with a fundamental learning problem, `simple linear regression <https://en.wikipedia.org/wiki/Simple_linear_regression>`_: you have observed a set of two-dimensional data points of ``X`` and ``Y``, where ``X`` is an explanatory variable and ``Y`` is the corresponding dependent variable, and you want to recover the underlying correlation between ``X`` and ``Y``. Linear regression can be used in many practical scenarios. For example, ``X`` can be a variable about house size, and ``Y`` a variable about house price. You can build a model that captures the relationship between them by observing real estate markets.
Suppose there are `n` observed data points :math:`\{(x_i, y_i), i=1,..., n\}` of variable :math:`X` and :math:`Y`, and their relation can be characterized as :math:`y_i = wx_i + b`. The goal is to estimate :math:`w` and :math:`b` based on these observations.
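As a sanity check on this setup (illustrative plain NumPy, not PaddlePaddle code), the noise-free case can be solved in closed form by least squares:

```python
import numpy as np

# Generate n noise-free observations from the true line y = 2x + 0.3.
n = 100
x = np.linspace(-1.0, 1.0, n)
y = 2.0 * x + 0.3

# Solve min over (w, b) of sum_i (w*x_i + b - y_i)^2 via least squares.
A = np.column_stack([x, np.ones(n)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w, b)  # close to 2.0 and 0.3
```

The tutorial instead recovers the same parameters iteratively with a neural network, which generalizes to problems with no closed-form solution.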
Contributor:

Suppose there are :math:`n` observed data points :math:`\{(x_i, y_i), i=1,\ldots,n\}` of variables :math:`X` and :math:`Y`, and the true underlying function is :math:`y_i = wx_i + b`. The goal is to estimate the parameters :math:`w` and :math:`b` based on these observations.

----------------------

- To recover this relationship between ``X`` and ``Y``, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terminologies, it's just saying that we are starting from a random line ``Y' = wX + b`` , then we gradually adapt ``w`` and ``b`` to minimize the difference between ``Y'`` and ``Y``. Here is what it looks like in PaddlePaddle:
+ To recover this relationship between :math:`X` and :math:`Y`, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terminologies, it's just saying that we are starting from a random line :math:`Y' = wX + b` , then we gradually adapt :math:`w` and :math:`b` to minimize the difference between :math:`Y'` and :math:`Y`. Here is what it looks like in PaddlePaddle:
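The "gradually adapt ``w`` and ``b``" idea can be sketched in plain Python as gradient descent on the squared error. This is a minimal illustration of the principle, not the PaddlePaddle implementation.

```python
import random

# Start from a random line y' = w*x + b and repeatedly nudge w and b
# down the gradient of the mean squared error.
random.seed(0)
data = [(k / 50.0, 2.0 * (k / 50.0) + 0.3) for k in range(-50, 51)]
w, b = random.random(), random.random()
lr = 0.5  # learning rate
for _ in range(500):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb
print(round(w, 3), round(b, 3))  # approaches 2.0 and 0.3
```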
Contributor:

We use a neural network to learn this function mapping from :math:`X` to :math:`Y`. The network has a linear activation layer and a square error cost layer. Actually, it is just a line in two-dimensional space: :math:`Y' = wX + b`. To learn the parameters, we gradually adjust :math:`w` and :math:`b` to minimize the Euclidean difference between :math:`Y'` and :math:`Y`. Here is the related Python code in PaddlePaddle:

@@ -54,14 +54,12 @@ To recover this relationship between ``X`` and ``Y``, we use a neural network wi

Some of the most fundamental usages of PaddlePaddle are demonstrated:
Contributor:

Some of the most fundamental PaddlePaddle usages are:

- **Data Layer**: a network always starts with one or more data layers. They provide input data to the rest of the network. In this problem, two data layers are used respectively for ``X`` and ``Y``.
- **FC Layer**: FC layer is short for Fully Connected Layer, which connects all the input units to current layer and does the actual computation specified as activation function. Computation layers like this are the fundamental building blocks of a deeper model.
- **Cost Layer**: in training phase, cost layers are usually the last layers of the network. They measure the performance of the current model, and provide guidance to adjust parameters.
- The first part shows how to feed data into PaddlePaddle. In general cases, PaddlePaddle reads raw data from a list of files, and then do some user-defined process to get real input. In this case, we only need to create a placeholder file since we are generating synthetic data on the fly.
Contributor:

The first part shows how to feed data into PaddlePaddle. In general, PaddlePaddle first reads raw data from a list of files, then does some user-defined pre-processing to get the desired inputs. In this case, we only need to create a placeholder file, since we are generating synthetic data.

- The second part describes learning algorithm. It defines in what ways adjustments are made to model parameters. PaddlePaddle provides a rich set of optimizers, but a simple momentum-based optimizer will suffice here, and it processes 12 data points each time.
Contributor:

The second part describes the learning algorithm. It defines how the model parameters are adjusted to optimize the objective function. PaddlePaddle provides a rich set of optimizers; the momentum-based stochastic gradient descent algorithm is sufficient for most applications.
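For readers unfamiliar with the momentum-based optimizer mentioned here, the classic update rule can be sketched in a few lines. This is illustrative; `momentum_sgd_step` is a hypothetical helper, not PaddlePaddle's API.

```python
def momentum_sgd_step(param, grad, velocity, lr=0.1, mu=0.9):
    # Classic momentum: keep a running velocity so updates "remember"
    # previous gradient directions, smoothing the descent path.
    velocity = mu * velocity - lr * grad
    return param + velocity, velocity

# Usage: minimize f(p) = (p - 3)^2, whose gradient is 2*(p - 3).
p, v = 0.0, 0.0
for _ in range(300):
    p, v = momentum_sgd_step(p, 2 * (p - 3), v)
print(round(p, 4))  # close to 3.0
```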

- Finally, the network configuration. It usually is as simple as "stacking" layers. Three kinds of layers are used in this configuration:
@tianbingsz (Contributor), Dec 19, 2016:

Finally, it is the network configuration, which simply "stacks" layers. There are three kinds of layers used in the configuration:

- :code:`Data Layer`: a network always starts with one or more data layers. They provide input data to the rest of the network. In this problem, two data layers are used respectively for :math:`X` and :math:`Y`.
- :code:`FC Layer`: FC layer is short for Fully Connected Layer, which connects all the input units to current layer and does the actual computation specified as the activation function. Computation layers like this are the fundamental building blocks of a deeper model.
Contributor:

FC layer is short for Fully Connected Layer, which connects all the input units to the current layer and does the actual computation with the activation function. Layers are the fundamental building blocks of deep learning models.

- :code:`Cost Layer`: in training phase, cost layers are usually the last layers of the network. They measure the performance of the current model and provide guidance to adjust parameters.
Contributor:

During model training, the cost layer is usually the network's last layer. It evaluates the model's performance and serves as the objective function to be optimized when tuning parameters.

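The three layer kinds and the "stacking" idea described in the bullets above can be sketched framework-agnostically. The class names here are illustrative toys, not PaddlePaddle's actual layer API.

```python
class DataLayer:
    """Provides input data to the rest of the network."""
    def __init__(self, name):
        self.name = name
    def forward(self, value):
        return value

class FCLayer:
    """One fully connected unit with a linear activation: y' = w*x + b."""
    def __init__(self, w=0.0, b=0.0):
        self.w, self.b = w, b
    def forward(self, x):
        return self.w * x + self.b

class SquareErrorCost:
    """Measures how far the prediction y' is from the label y."""
    def forward(self, y_pred, y_true):
        return (y_pred - y_true) ** 2

# "Stacking": data layers feed the FC layer, whose output feeds the cost.
x_layer, y_layer = DataLayer("x"), DataLayer("y")
fc = FCLayer(w=2.0, b=0.3)
cost = SquareErrorCost()
loss = cost.forward(fc.forward(x_layer.forward(1.0)), y_layer.forward(2.3))
print(loss)  # 0.0 when w and b match the true line
```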

Now that everything is ready, you can train the network with a simple command line call:
Contributor:

Now everything is ready, you can train the network with a simple command line:

@@ -70,20 +68,19 @@ Now that everything is ready, you can train the network with a simple command li
paddle train --config=trainer_config.py --save_dir=./output --num_passes=30


- This means that PaddlePaddle will train this network on the synthectic dataset for 30 passes, and save all the models under path ``./output``. You will see from the messages printed out during training phase that the model cost is decreasing as time goes by, which indicates we are getting a closer guess.
+ This means that PaddlePaddle will train this network on the synthetic dataset for 30 passes, and save all the models under the path :code:`./output`. You will see from the messages printed out during training phase that the model cost is decreasing as time goes by, which indicates we are getting a closer guess.
Contributor:

Here PaddlePaddle will train this network on the synthetic dataset for 30 passes and save all the models under the path :code:`./output`. During training, the log messages show the model cost decreasing as the number of passes increases, which indicates that we are getting a closer "guess" of the learned parameters.



Evaluate the Model
-------------------

- Usually, a different dataset that left out during training phase should be used to evalute the models. However, we are lucky enough to know the real answer: ``w=2, b=0.3``, thus a better option is to check out model parameters directly.
+ Usually, a different dataset that left out during training phase should be used to evaluate the models. However, we are lucky enough to know the real answer: :math:`w=2, b=0.3`, thus a better option is to check out model parameters directly.
Contributor:

Usually, a test dataset (different from the training dataset) is needed to evaluate the models. However, we already know the exact function: :math:`w=2, b=0.3`, so we can check the model parameters directly.


- In PaddlePaddle, training is just to get a collection of model parameters, which are ``w`` and ``b`` in this case. Each parameter is saved in an individual file in the popular ``numpy`` array format. Here is the code that reads parameters from last pass.
+ In PaddlePaddle, training is just to get a collection of model parameters, which are :math:`w` and :math:`b` in this case. Each parameter is saved in an individual file in the popular :code:`numpy` array format. Here is the code that reads parameters from the last pass.
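A minimal sketch of such a parameter reader is below. The 16-byte header size is an assumption about the on-disk layout, and the example fabricates a file so the round trip is self-contained; it is not the tutorial's exact code.

```python
import os
import tempfile
import numpy as np

def load_parameter(path, header_bytes=16):
    """Read a parameter file as a float32 numpy array, skipping a
    fixed-size binary header (16 bytes is an assumed layout)."""
    with open(path, "rb") as f:
        f.read(header_bytes)  # skip the header
        return np.fromfile(f, dtype=np.float32)

# Self-contained round trip: write a fake parameter file, read it back.
values = np.array([2.0, 0.3], dtype=np.float32)
path = os.path.join(tempfile.mkdtemp(), "w")
with open(path, "wb") as f:
    f.write(b"\x00" * 16)  # placeholder header
    f.write(values.tobytes())
print(load_parameter(path))  # recovers [2.0, 0.3]
```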
Contributor:

In PaddlePaddle, model training is to learn parameters, such as :math:`w` and :math:`b` in our case. Each parameter is saved in an individual file in the :code:`numpy` array format. Here is the code that reads parameters from the last pass.


- There, you have recovered the underlying pattern between ``X`` and ``Y`` only from observed data.
+ There, you have recovered the underlying pattern between :math:`X` and :math:`Y` only from observed data.
Contributor:

Finally, we have learned the underlying mapping function from :math:`X` to :math:`Y` from the observed data.

@@ -96,6 +93,6 @@ In PaddlePaddle, training is just to get a collection of model parameters, which
.. image:: parameters.png
:align: center

- Although starts from a random guess, you can see that value of ``w`` changes quickly towards 2 and ``b`` changes quickly towards 0.3. In the end, the predicted line is almost identical with real answer.
+ Although starts from a random guess, you can see that value of :math:`w` changes quickly towards 2 and :math:`b` changes quickly towards 0.3. In the end, the predicted line is almost identical with the real answer.
Contributor:

Although we start from a random guess, the parameter values of :math:`w` and :math:`b` quickly move towards 2 and 0.3. In the end, the estimate is almost identical to the true underlying function.

@tianbingsz (Contributor) left a comment:

Just did a quick round of review. Please read the document and comments carefully again and feel free to polish the document further...

@luotao1 luotao1 closed this Apr 28, 2017
wangxicoding pushed a commit to wangxicoding/Paddle that referenced this pull request Dec 9, 2021
lizexu123 pushed a commit to lizexu123/Paddle that referenced this pull request Feb 23, 2024