
Make XavierInitializer default value & Improve setInitializer #664

Merged
stu1130 merged 1 commit into deepjavalibrary:block_improvement on Feb 19, 2021

Conversation

@stu1130 (Contributor) commented on Feb 18, 2021

Description

Brief description of what this PR is about

  • If this change is a backward incompatible change, why must this change be made?
  • Interesting edge cases to note here

```java
 *
 * @param loss the loss to use for training
 */
public DefaultTrainingConfig(Loss loss) {
    // Defaults to initializer defined in https://arxiv.org/abs/1502.01852
    this.initializer = new XavierInitializer(RandomType.GAUSSIAN, FactorType.IN, 2);
```
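For context, a hedged sketch of how this default surfaces to users; the class name and the choice of Loss.softmaxCrossEntropyLoss() are illustrative only, not part of this PR:

```java
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.TrainingConfig;
import ai.djl.training.loss.Loss;

public class ConfigExample {
    public static void main(String[] args) {
        // With this change, the config starts from the Xavier initializer above
        // (Gaussian, fan-in, factor 2); no explicit initializer setup is needed.
        TrainingConfig config =
                new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss());
    }
}
```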
A reviewer (Contributor) commented:

Can you keep this as the default initializer?

@stu1130 (Author) replied:

XavierInitializer is already a default initializer. You can check the weight Parameter type.

@stu1130 (Author) commented on Feb 19, 2021:

I copied the XavierInitializer parameters from TensorFlow. Let me know if they are really optimal:
https://www.tensorflow.org/api_docs/python/tf/keras/initializers/GlorotUniform

A reviewer (Contributor) commented:

I am not sure of the best way to present this. Right now, there are two papers on the Xavier initializer (that I know of, at least). The default parameters in new XavierInitializer() come from the first paper, which also named the initializer. The parameters in new XavierInitializer(RandomType.GAUSSIAN, FactorType.IN, 2) are from the second paper, which improved upon it. We should probably make that a constant in XavierInitializer or something, but I think we should keep our default initializer on the second paper rather than reverting to the first.
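A minimal sketch of the constant the reviewer is suggesting, assuming DJL's ai.djl.training.initializer package and the nested RandomType/FactorType enums; the holder class and field names here are hypothetical:

```java
import ai.djl.training.initializer.Initializer;
import ai.djl.training.initializer.XavierInitializer;
import ai.djl.training.initializer.XavierInitializer.FactorType;
import ai.djl.training.initializer.XavierInitializer.RandomType;

/** Hypothetical holder naming the two sets of Xavier defaults discussed above. */
public final class XavierDefaults {

    /** First-paper (Glorot & Bengio, 2010) defaults, as in new XavierInitializer(). */
    public static final Initializer GLOROT = new XavierInitializer();

    /** Second-paper (He et al., 2015, https://arxiv.org/abs/1502.01852) defaults. */
    public static final Initializer HE =
            new XavierInitializer(RandomType.GAUSSIAN, FactorType.IN, 2);

    private XavierDefaults() {}
}
```

DefaultTrainingConfig could then reference the second-paper defaults by name instead of repeating the three constructor arguments.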

@stu1130 (Author) replied:

done!

Review thread on api/src/main/java/ai/djl/training/TrainingConfig.java (outdated, resolved)
stu1130 merged commit 41fd705 into deepjavalibrary:block_improvement on Feb 19, 2021
stu1130 added a commit that referenced this pull request Mar 2, 2021
* Use builder pattern for Parameter (#661)

* Make XavierInitializer default value & Improve setInitializer (#664)

* Refactor initialize (#675)

* Remove NDManager on getOutputShapes (#710)
zachgk added a commit that referenced this pull request Mar 11, 2021
* This creates the component which will populate the Download Tab with Download Buttons.

* Making a place for the download buttons.

* Adding the Model Download Handler, allowing the backend to feed the links into the Model View, and making slight changes for readability.

* Getting rid of some of the test code.

* Improve Block usability (#712)

* Use builder pattern for Parameter (#661)

* Make XavierInitializer default value & Improve setInitializer (#664)

* Refactor initialize (#675)

* Remove NDManager on getOutputShapes (#710)

* Removing unnecessary logging messages.

* block factory init commit (#697)

* [DOCS] Fixing TrainingListener documentation (#718)

* Fixing TrainingListener documentation

* Fixing PR reviews

* Fix DJL serving flaky test for mac (#721)

Change-Id: I9eccc84b0c34652e50c5fe5a4fe42f2b82d65a3d

* Fixing all of the nits.

* Getting rid of unnecessary methods.

* update onnxruntime along with String tensor (#724)

* Add profiler doc (#722)

* Resolving some comments.

* Using a better criterion in case multiple models have the same name.

* Fixing the Javadoc.

* Configure verbose of mxnet extra libraries (#728)

Change-Id: I66d54aa496cccbb9e8c0a89eeaa458605958d9c6

* Added a TODO for using the artifact repo to get the base uri.

* paddlepaddle CN notebook (#730)

* paddlepaddle CN notebook

* install font

Change-Id: I2d749e617b0bf78ecbcd168b82c53a1fab49a2c0

* refactor on name

Change-Id: I9e379eee51ceae16391850b3ba9782acb04c4021

* Refine the text

Co-authored-by: gstu1130 <gstu1130@gmail.com>

* add EI documentation (#733)

* add EI documentation

* fix pmd rules

Change-Id: Ieee5577c26f6df2843781f8f9180de35069a5de3

* allow pytorch stream model loading (#729)

* allow pytorch stream model loading

* updates

Change-Id: Ibc26261b90de673712e90de0d640a8f32f23763e

* add NDList decode from inputStream (#734)

Change-Id: I6a31d8b0b955f2dbb762220b101e3928a34699c1

* Remove memory scope and improve memory management (#695)

The MemoryScope revealed a number of shortcomings in DJL memory management. Along with deleting MemoryScope, this PR fixes many of them.

First, the NDManager.{attach, detach} were renamed to xxxInternal. This is to
differentiate them from the attach and detach methods that are intended to be used.

There are two new concepts in memory management. An NDResource interface was
created to combine the concepts of managed memory that was used in NDArray and
NDList. It could also be used in more classes in the future. This includes the
getManager, attach, and detach.

Within the NDManager, there is now a second "management convention". Under the first
convention, normal resources are added to the manager and then closed when the manager
closes. This works for small numbers of NDArrays, but not when operations transitively
create intermediate arrays. So, the second convention is the tempResource. Instead of
being freed when the manager is closed, temp resources are returned to their original
manager. This makes it possible to create a temporary scope, do operations within it,
and have the inputs and return value returned to the parent while the intermediate work
is cleaned up. This also matches the concept of ownership/borrowing.

Using these, a few additional helper methods were created. There is
`NDManager.from(resource)` to ease creation of managers based on a resource.
There is also `scopeManager.ret(returnValue)` to help with returning values
outside of the scopeManager. Lastly, there is a `scopeManager.{temp,}AttachAll`
to attach a number of resources to a manager within a single call.

Using these improvements, the new methods were applied to the old locations where
MemoryScope was used, as well as to an additional case in NDManagerEx.

Also, the old attach methods were altered to be `void`. Because the return
values are no longer used anywhere and are not as necessary in the current
scheme, I figured it would simplify things. It also helps for methods like
`NDList.attach`, which does not have a single original NDManager when attaching.

Change-Id: I91d109cd14d70fa64fd8fffa0b50d88ab053013e
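To make the two conventions concrete, here is a rough sketch of the temporary-scope pattern built from the helpers named above (NDManager.from, tempAttachAll, ret); the center method and the exact attachment semantics are assumptions for illustration, not code from this commit:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;

public final class ScopeSketch {

    /** Centers an array while cleaning up intermediates with the scope. */
    public static NDArray center(NDArray input) {
        // Child manager created from the input resource.
        try (NDManager scope = NDManager.from(input)) {
            // Temp-attach the input: it returns to its original manager
            // when the scope closes instead of being freed.
            scope.tempAttachAll(input);
            NDArray mean = input.mean();        // intermediate, dies with scope
            NDArray centered = input.sub(mean); // intermediate, dies with scope
            // ret(...) hands the return value back out of the scope.
            return scope.ret(centered);
        }
    }
}
```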

* Remove erroneous random forest application (#726)

The application was changed to the more accurate softmax_regression (matching
the terminology from the D2L book).

Change-Id: I1f69f005bbe38b125f2709c2988d06c14eebb765

* Minor fixes on duplicated code (#736)

* remove methods that already defined in the NDArrayAdapter

Change-Id: I01cc03a7f5b427bf31c6b3fd8d2136f2a27fe93b

* refactor toString

Change-Id: Iea22b16e1daa9f759b55c1a8b8b85536482e551a

* remove sparse NDArray

Change-Id: Icb44096519775f54cb32cc768c14f49e33dc7ea5

* fix test

Change-Id: Icef580ed77e7bba22864ce44577de3cba51e3e41

Co-authored-by: Jake Lee <gstu1130@gmail.com>
Co-authored-by: Lanking <lanking520@live.com>
Co-authored-by: aksrajvanshi <aksrajvanshi@gmail.com>
Co-authored-by: Frank Liu <frankfliu2000@gmail.com>
Co-authored-by: Zach Kimberg <kimbergz@amazon.com>
Lokiiiiii pushed a commit to Lokiiiiii/djl that referenced this pull request Oct 10, 2023