removes styling of function descriptions as requested in #1256 (#1399)
* removes styling of function descriptions as requested in #1256

* reverts modifications to the example files
BanzaiTokyo authored Oct 25, 2020
1 parent 5bd0374 commit ddcce01
Showing 1 changed file with 8 additions and 0 deletions.
ignite/contrib/engines/common.py: 8 additions & 0 deletions
@@ -49,6 +49,7 @@ def setup_common_training_handlers(
**kwargs: Any
):
"""Helper method to setup trainer with common handlers (it also supports distributed configuration):
- :class:`~ignite.handlers.TerminateOnNan`
- handler to setup learning rate scheduling
- :class:`~ignite.handlers.ModelCheckpoint`
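
For reference, a minimal usage sketch of setup_common_training_handlers as described in its docstring above (a sketch only, assuming the 0.4-era ignite API; the keyword arguments to_save, output_path and save_every_iters are assumptions and may differ between releases):

from torch import nn, optim
from ignite.engine import create_supervised_trainer
from ignite.contrib.engines import common

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())

# One call attaches TerminateOnNan, checkpointing of the given objects and the
# other common handlers listed in the docstring to the trainer.
common.setup_common_training_handlers(
    trainer,
    to_save={"model": model, "optimizer": optimizer},  # assumed kwarg: objects to checkpoint
    output_path="/tmp/checkpoints",                    # hypothetical output directory
    save_every_iters=1000,
)
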
@@ -312,6 +313,7 @@ def setup_tb_logging(
**kwargs: Any
):
"""Method to setup TensorBoard logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -343,6 +345,7 @@ def setup_visdom_logging(
**kwargs: Any
):
"""Method to setup Visdom logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -373,6 +376,7 @@ def setup_mlflow_logging(
**kwargs: Any
):
"""Method to setup MLflow logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -403,6 +407,7 @@ def setup_neptune_logging(
**kwargs: Any
):
"""Method to setup Neptune logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -433,6 +438,7 @@ def setup_wandb_logging(
**kwargs: Any
):
"""Method to setup WandB logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -463,6 +469,7 @@ def setup_plx_logging(
**kwargs: Any
):
"""Method to setup Polyaxon logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
@@ -493,6 +500,7 @@ def setup_trains_logging(
**kwargs: Any
):
"""Method to setup Trains logging on trainer and a list of evaluators. Logged metrics are:
- Training metrics, e.g. running average loss values
- Learning rate(s)
- Evaluation metrics
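
All of the setup_*_logging helpers touched by this commit document the same contract. As an illustration, a minimal sketch for setup_tb_logging (assuming the 0.4-era signature with a positional output path followed by the trainer; the remaining argument names are assumptions and may differ between releases):

from torch import nn, optim
from ignite.engine import create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy
from ignite.contrib.engines import common

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})

# Attaches TensorBoard handlers for the items listed in the docstrings above:
# training metrics, learning rate(s) and evaluation metrics.
tb_logger = common.setup_tb_logging(
    "/tmp/tb-logs",                        # hypothetical log directory
    trainer,
    optimizers=optimizer,
    evaluators={"validation": evaluator},  # assumed: a dict of tagged evaluators
    log_every_iters=100,
)

The other helpers in this file (setup_visdom_logging, setup_mlflow_logging, setup_neptune_logging, setup_wandb_logging, setup_plx_logging, setup_trains_logging) follow the same trainer/optimizers/evaluators pattern and differ mainly in backend-specific configuration.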