add sparse update section in fluid dist doc #8997
Conversation
@@ -59,6 +59,13 @@ After converting:
queue. It will block until the queue has the required number of
tensors.

### Sparse Update

For an embedding layer, the gradient maybe be very sparse(upper 90% is zero) for each mini-batch.
There should be a space between `sparse` and `(`.
upper => up to
the gradient maybe be very sparse => the gradient may have many rows containing only 0
Done.
### Sparse Update

For an embedding layer, the gradient maybe be very sparse(upper 90% is zero) for each mini-batch.
Fluid use [SelectedRows](../selected_rows.md) to support the sparse variable. Distributed training support `Sparse Update`,
the sparse variable => sparse variables.
support => supports
Distributed training support `Sparse Update`, which sends a `SelectedRows` variable to the parameter server to run parameter updates.
Done.
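For context, a minimal sketch of how a trainer would typically ask for this kind of sparse gradient, assuming the Fluid Python API's `fluid.layers.embedding` with an `is_sparse` flag (the flag itself is not part of this doc change, and the sizes below are made up):

```python
# Minimal sketch, assuming fluid.layers.embedding exposes is_sparse.
import paddle.fluid as fluid

# Integer word ids fed into an embedding table.
word_ids = fluid.layers.data(name='word_ids', shape=[1], dtype='int64')

# is_sparse=True asks the backward pass to emit the embedding gradient as
# a SelectedRows variable (only the rows touched by this mini-batch), which
# is what gets sent to the parameter server for the parameter update.
emb = fluid.layers.embedding(
    input=word_ids,
    size=[100000, 64],  # [vocab_size, embedding_dim]; hypothetical sizes
    is_sparse=True)
```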
It would save a lot of bandwidth and make the distributed training job have better performance.
For embedding layers, the gradient may have many rows containing only 0 when training,
if the gradient use a dense tensor to do parameter optimization,
it could spend unnessesary memory, slow down the calculations and waste
unnessesary => unnecessary
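To illustrate why the dense form is wasteful, here is a conceptual sketch of what a SelectedRows-style gradient carries (illustration only, not the actual Fluid `SelectedRows` C++ type; the sizes are hypothetical):

```python
import numpy as np

# Hypothetical sizes for illustration.
vocab_size, emb_dim = 100000, 64

# Rows actually touched by one mini-batch.
touched_rows = [3, 57, 4021]

# A SelectedRows-style gradient keeps the logical height, the touched row
# indices, and a dense block holding only those rows' values.
sparse_grad = {
    "height": vocab_size,
    "rows": touched_rows,
    "value": np.zeros((len(touched_rows), emb_dim), dtype="float32"),
}

# The dense alternative would allocate and transfer vocab_size * emb_dim
# floats every step; this form carries only len(touched_rows) * emb_dim.
```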
### Sparse Update

For embedding layers, the gradient may have many rows containing only 0 when training,
if the gradient use a dense tensor to do parameter optimization,
use -> uses.
LGTM++
Fixed #8996