
Releases: pytorch/ignite

New metrics, extended DDP support and bug fixes

24 Jun 22:50

PyTorch-Ignite 0.4.5 - Release Notes

New Features

Metrics

  • Added BLEU metric (#1834) (see the sketch after this list)
  • Added ROUGE metric (#1772)
  • Added MultiLabelConfusionMatrix metric (#1613)
  • Added Cohen Kappa metric (#1690)
  • Extended sync_all_reduce API (#1823)
  • Made EpochMetric more generic by extending the list of valid types (#1748)
  • Fixed issue with metric's output device (#2062)
  • Added support for list of tensors as metric input (#2055)
  • Implemented Jaccard Index shortcut for metrics (#1682)
  • Updated Loss metric to use required_output_keys (#2027)
  • Added classification report metric (#1887)
  • Added output detach for Canberra metric (#1820)
  • Improved ROC AUC (#1762)
  • Improved AveragePrecision metric and tests (#1756)
  • Uniform handling of metric types for all loggers (#2021)
  • More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)
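For example, the new BLEU metric can be used like any other ignite metric. A minimal sketch; the batched (candidates, references) update format follows the current docs and the token lists are illustrative:

    from ignite.metrics import Bleu

    # Candidates and references are lists of token lists; smooth="smooth1"
    # is one of the available smoothing options.
    bleu = Bleu(ngram=4, smooth="smooth1")
    y_pred = [["the", "cat", "sat", "on", "the", "mat"]]            # candidates
    y = [[["a", "cat", "sat", "on", "the", "mat"],                  # references
          ["there", "is", "a", "cat", "on", "the", "mat"]]]
    bleu.update((y_pred, y))
    print(bleu.compute())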

Engine

  • Added native torch.cuda.amp and apex automatic mixed precision support to create_supervised_trainer and create_supervised_evaluator (#1714, #1589) (see the sketch after this list)
  • Updated state.batch/state.output lifespan in Engine (#1919)
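A minimal sketch of the new mixed-precision options, assuming the amp_mode and scaler keyword arguments introduced by #1714/#1589 (requires a CUDA device; the toy model is illustrative):

    import torch.nn as nn
    import torch.optim as optim
    from ignite.engine import create_supervised_trainer

    model = nn.Linear(10, 2)
    optimizer = optim.SGD(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # amp_mode="amp" enables native torch.cuda.amp autocast; scaler=True
    # creates a GradScaler internally for loss scaling ("apex" selects apex).
    trainer = create_supervised_trainer(
        model, optimizer, criterion, device="cuda", amp_mode="amp", scaler=True
    )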

Distributed module

  • Handled IterableDataset with auto_dataloader (#2028)
  • Enabled gpu support for gloo backend (#2016)
  • Added safe_mode to idist.broadcast (#1839) (see the sketch after this list)
  • Improved idist to support different init_methods (#1767)
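A sketch of safe_mode for idist.broadcast, assuming the flag from #1839: with safe_mode=True, ranks other than src may pass None and still receive the broadcast value.

    import ignite.distributed as idist

    # Only rank 0 provides the value; the other ranks pass None.
    value = "checkpoint-epoch-10" if idist.get_rank() == 0 else None
    value = idist.broadcast(value, src=0, safe_mode=True)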

Other improvements

  • Improved the LR finder and moved it to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930) (see the sketch after this list)
  • Moved param handler to core (#1988)
  • Added an option to store EpochOutputStore data on engine.state; moved EpochOutputStore to core (#1982, #1974)
  • Set seed for xla in ignite.utils.manual_seed (#1970)
  • Fixed Precision/Recall in the multi-label, non-averaged configuration for DDP (#1646)
  • Updated PolyaxonLogger to handle v1 and v0 (#1625)
  • Added *args, **kwargs to the BaseLogger.attach method (#2034)
  • Enabled metric ordering on ProgressBar (#1937)
  • Updated wandb logger (#1896)
  • Fixed type hint for ProgressBar (#2079)
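With the LR finder now in core, a run looks roughly like this. A minimal sketch with a toy model and random data, assuming FastaiLRFinder is importable from ignite.handlers after the move:

    import torch
    import torch.nn as nn
    from ignite.engine import create_supervised_trainer
    from ignite.handlers import FastaiLRFinder

    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
    trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())
    data = [(torch.rand(8, 4), torch.randint(0, 2, (8,))) for _ in range(32)]

    lr_finder = FastaiLRFinder()
    to_save = {"model": model, "optimizer": optimizer}
    # attach() yields a temporary trainer that runs the LR range test and
    # restores the model/optimizer state on exit.
    with lr_finder.attach(trainer, to_save=to_save) as finder_trainer:
        finder_trainer.run(data)
    print(lr_finder.lr_suggestion())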

Bug fixes

  • BC-breaking: Improved loggers to keep configuration (#1945)
  • Fixed warnings in CI (#2023)
  • Fixed Precision for all zero predictions (#2017)
  • Renamed the default logger (#2006)
  • Fixed Accumulation metric with Nvidia/Apex (#1978)
  • Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
  • Updated nltk-smooth2 for BLEU metric (#1911)
  • Added full read permissions to saved files (#1876, #1880)
  • Fixed a bug with horovod _do_manual_all_reduce (#1848)
  • Fixed small bug in "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
  • Fixed f-string in mnist_save_resume_engine.py example (#2077)
  • Fixed an issue where RNG states accidentally ended up on CUDA for DeterministicEngine (#2081)

Housekeeping

A lot of PRs

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff

Bug fixes and docs improvements

03 Mar 23:04

PyTorch-Ignite 0.4.4 - Release Notes

Bug fixes:

  • BC-breaking: Moved detach outside of the loss function computation (#1675, #1692)
  • Added eps to avoid nans in canberra error (#1699)
  • Removed size limitation for str on collective ops (#1702)
  • Fixed imports in Docker images; Pillow-SIMD is now installed (#1638, #1639, #1628, #1711)

Doc improvements

Other improvements

  • Fixed artifacts urls for pypi (#1629)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff

New features, better docs, dropped Python 3.5

07 Feb 23:13

PyTorch-Ignite 0.4.3 - Release Notes

🎉 Since September we have a new logo (#1324) 🎉

Core

Metrics

  • [BC-breaking] Made Metrics accumulate values on the device specified by the user (#1238) (see the sketch after this list)
  • Fixed BC when a custom metric returns a dict (#1478)
  • Added PSNR metric (#1570, #1595)
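A sketch of the BC-breaking device argument from #1238: the metric's internal accumulators now live on the device the user specifies (assumes CUDA is available; the toy model is illustrative):

    import torch.nn as nn
    from ignite.engine import create_supervised_evaluator
    from ignite.metrics import Accuracy

    model = nn.Linear(4, 2)
    # Accuracy accumulates its counts on the GPU instead of the CPU.
    evaluator = create_supervised_evaluator(
        model, metrics={"accuracy": Accuracy(device="cuda")}, device="cuda"
    )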

Handlers

  • Checkpoint can save models with the same filename (#1423)
  • Added greater_or_equal option to the Checkpoint handler (#1597)
  • Updated handlers to use setup_logger (#1617)
  • Added TimeLimit handler (#1611) (see the sketch after this list)
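The new TimeLimit handler terminates a run after a given number of seconds; a minimal sketch:

    from ignite.engine import Engine, Events
    from ignite.handlers import TimeLimit

    trainer = Engine(lambda engine, batch: None)
    # Stop training once one hour has elapsed (checked after each iteration).
    trainer.add_event_handler(Events.ITERATION_COMPLETED, TimeLimit(limit_sec=3600))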

Distributed helper module

  • Added distributed CPU tests on Windows (#1429)
  • Added kwargs to idist.auto_model (#1552)
  • Improved horovod initializer (#1559)

Others

  • Dropped Python 3.5 support (#1500)
  • Added torch.cuda.manual_seed_all to ignite.utils.manual_seed (#1444)
  • Fixed to_onehot function to be torch scriptable (#1592)
  • Introduced standard stream for logger setup helper (#1601)

Docker images

  • Removed Entrypoint from Dockerfile and images (#1475)

Examples

Contrib

Metrics

  • Improved Canberra metric for DDP (#1314)
  • Improved ManhattanDistance metric for DDP (#1320)
  • Improved R2Score metric for DDP (#1318)

Handlers

  • Added new time profiler HandlersTimeProfiler, which allows per-handler time profiling (#1398, #1474)
  • Fixed attach_opt_params_handler to return RemovableEventHandle (#1502)
  • Renamed TrainsLogger to ClearMLLogger, keeping BC (#1557, #1560)

Documentation improvements

The codebase is now checked with mypy


Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn

Improved distributed support (Horovod framework, epoch-wise metrics, etc.), new metrics/handlers, bug fixes and pre-built Docker images.

20 Sep 19:12

PyTorch-Ignite 0.4.2 - Release Notes

Core

New Features and bug fixes

  • Added SSIM metric (#1217)

  • Added prebuilt Docker images (#1218)

  • Added distributed support for EpochMetric and related metrics (#1229)

  • Added required_output_keys public attribute (#1291)

  • Pre-built Docker images for computer vision and NLP tasks, powered by Nvidia/Apex, Horovod and MS DeepSpeed (#1304, #1248, #1218)

Handlers and utils

  • Allowed passing keyword arguments to the save function of Checkpoint (#1245)

Distributed helper module

  • Added support for Horovod (#1195)
  • Added idist.broadcast (#1237)
  • Added sync_bn option to idist.auto_model (#1265)
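Put together, a sketch of the new distributed options (assumes horovod is installed; the tiny model is illustrative):

    import torch.nn as nn
    import ignite.distributed as idist

    def training(local_rank, config):
        # auto_model wraps the model for the current backend; the new
        # sync_bn option (#1265) converts BatchNorm layers to synchronized ones.
        model = idist.auto_model(
            nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)), sync_bn=True
        )

    # Horovod (#1195) joins nccl/gloo/xla-tpu as a supported idist backend.
    with idist.Parallel(backend="horovod") as parallel:
        parallel.run(training, {})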

Contrib

New Features and bug fixes

  • Added EpochOutputStore handler (#1226)
  • Improved displayed tag for tqdm progress bar (#1279)
  • Fixed bug with ParamGroupScheduler with schedulers based on different optimizers (#1274)

And a lot of housekeeping: pre-September Hacktoberfest contributions

  • Added initial Mypy check at CI step (#1296)
  • Fixed typo in docs (concepts) (#1295)
  • Fixed link to pytorch documents (#1294)
  • Removed prints from tests (#1292)
  • Downgraded tqdm version to stabilize the CI (#1293)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff

Bugfixes and updates

23 Jul 07:42

PyTorch-Ignite 0.4.1 - Release Notes

Core

New Features and bug fixes

  • Improved docs for custom events (#1179)

Handlers and utils

  • Added custom filename pattern for saving checkpoints (#1127)
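A sketch of the filename_pattern argument (#1127); the placeholder set in the comment follows the docs, and the model is a toy stand-in:

    import torch.nn as nn
    from ignite.handlers import Checkpoint, DiskSaver

    model = nn.Linear(2, 2)
    # Supported placeholders include {filename_prefix}, {name}, {score},
    # {score_name}, {global_step} and {ext}.
    handler = Checkpoint(
        {"model": model},
        DiskSaver("/tmp/ckpts", create_dir=True),
        filename_pattern="{name}-{global_step}.{ext}",
        global_step_transform=lambda engine, event: engine.state.epoch,
    )

Attached to e.g. Events.EPOCH_COMPLETED on a trainer, this saves files like model-1.pt, model-2.pt, and so on.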

Distributed helper module

  • Improved naming in _XlaDistModel (#1173)
  • Minor optimization of idist.get_* methods (#1196)
  • Fixed distributed proxy sampler runtime error (#1192)
  • Fixed a bug when using idist with the "nccl" backend while torch CUDA is not available (#1166)
  • Fixed an issue with logging XLA tensors (#1207)

Contrib

New Features and bug fixes

  • Fixed the warning "TrainsLogger output_handler can not log metrics value" (#1170)
  • Improved usage of contrib common methods with other save handlers (#1171)

Examples

  • Improved Pascal VOC example (#1193)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5

Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices

26 Jun 00:33

PyTorch-Ignite 0.4.0 - Release Notes

Core

BC breaking changes

  • Simplified engine - BC breaking change (#940, #939, #938)
    • no more internal patching of torch DataLoader.
    • the seed argument of Engine.run is deprecated.
    • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
  • Made all Events CallableEventsWithFilter (#788).
  • Made ignite compatible only with pytorch >= 1.3 (#1016, #1150).
    • ignite is tested on the latest and nightly versions of pytorch.
    • exact compatibility with previous versions can be checked here.
  • Removed deprecated arguments from BaseLogger (#1051).
  • Deprecated CustomPeriodicEvent (#984).
  • RunningAverage now computes the average of the output quantity instead of a sum in DDP (#991).
  • Checkpoint now stores files with the .pt extension instead of .pth (#873).
  • The archived argument of Checkpoint and ModelCheckpoint is deprecated (#873).
  • create_supervised_trainer and create_supervised_evaluator no longer move the model to device (#910).

See also the migration note for details on how to update your code.

New Features and bug fixes

Ignite Distributed [Experimental]

  • Introduced ignite.distributed as the idist module (#1045)
    • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
    • supports native torch distributed configuration and XLA devices.
    • metrics computation works in all supported distributed configurations: GPUs and TPUs.
    • Parallel utility and auto module (#1014) (see the sketch after this list).
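A minimal sketch of the idist workflow (backend=None runs the same code without any distributed setup; the toy model is illustrative):

    import torch.nn as nn
    import ignite.distributed as idist

    def training(local_rank, config):
        # Helper methods expose the distributed context uniformly.
        print(f"rank {idist.get_rank()}/{idist.get_world_size()} on {idist.device()}")
        model = idist.auto_model(nn.Linear(10, 2))  # wraps for DDP/XLA as needed

    # backend can be "nccl", "gloo", "xla-tpu", or None for a non-distributed run.
    with idist.Parallel(backend=None) as parallel:
        parallel.run(training, {})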

Engine & Events

  • Added flexibility to event handlers by packing triggering events (#868).
  • The engine argument is now optional in event handlers (#889, #919) (see the sketch after this list).
  • engine.state is now initialized before calling engine.run (#1028).
  • Engine can run on a dataloader based on IterableDataset without specifying epoch_length (#1077).
  • Added user keys to Engine's state dict (#914).
  • Bug fixes in the Engine class (#1048, #994).
  • The epoch_length argument is now optional (#985)
    • suitable for working with finite iterators of unknown length.
  • Added times to engine.state (#958).
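A sketch combining several of these changes: a zero-argument handler attached with an event filter, run on a plain iterator without epoch_length:

    from ignite.engine import Engine, Events

    trainer = Engine(lambda engine, batch: batch)

    # The engine argument is optional in handlers (#889); built-in events
    # accept filters such as every=... (#788).
    @trainer.on(Events.ITERATION_COMPLETED(every=100))
    def log_progress():
        print(f"iteration {trainer.state.iteration}")

    # epoch_length is optional (#985): run until the iterator is exhausted.
    trainer.run(iter(range(250)))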

Metrics

  • Added Frequency metric for ops/s calculations (#760, #783, #976).
  • Metrics computation can be customized with the introduced MetricUsage (#979, #1054)
    • batch-wise/epoch-wise or custom-programmed metric update and compute methods.
  • Metrics can be detached (#827).
  • Fixed a bug in RunningAverage when the output is a torch tensor (#943).
  • Improved computation performance of EpochMetric (#967).
  • Fixed the average recall value of ConfusionMatrix (#846).
  • Metrics can now be serialized using dill (#930).
  • Added support for nested metric values (#968).

Handlers and utils

  • Checkpoint: improved filename when the score value is an integer (#758).
  • Checkpoint: fixed returning the worst model of the saved models (#745).
  • Checkpoint: load_objects can load single-object checkpoints (#772) (see the sketch after this list).
  • Checkpoint: we now save only one checkpoint per priority (#847).
  • Checkpoint: added kwargs to Checkpoint.load_objects (#861).
  • Checkpoint: now saves model.module.state_dict() for DDP and DP (#1086).
  • Checkpoint and related: other improvements (#937).
  • Checkpoint and EarlyStopping are now stateful (#1156).
  • Support namedtuple in convert_tensor (#740).
  • Added decorator one_rank_only (#882).
  • Updated common.py (#904).
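A sketch of resuming from a saved checkpoint with load_objects (the checkpoint path and toy objects are illustrative):

    import torch
    import torch.nn as nn
    from ignite.handlers import Checkpoint

    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # load_objects maps the saved state dicts back onto the live objects.
    ckpt = torch.load("/tmp/models/checkpoint_1000.pt")
    Checkpoint.load_objects(
        to_load={"model": model, "optimizer": optimizer}, checkpoint=ckpt
    )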

Contrib

  • Added FastaiLRFinder (#596).

Metrics

  • Added RocCurve and PrecisionRecallCurve metrics (#875).

Parameters scheduling

  • Enabled multiple param groups for LRScheduler (#1027).
  • Parameter scheduling improvements (#1072, #859).
  • Parameter schedulers can work with any torch optimizer or any object with a param_groups attribute (#1163).

Support of experiment tracking systems

  • Added NeptuneLogger (#730, #821, #951, #954).
  • Added TrainsLogger (#1020, #1036, #1043).
  • Added WandbLogger (#926).
  • Added visdom_logger to the common module (#796).
  • TensorboardX is no longer mandatory if pytorch >= 1.2 (#858).
  • Simplified BaseLogger attach APIs (#1006).
  • Added kwargs to loggers' constructors and respective setup functions (#1015).

Time profiling

  • Added basic time profiler to contrib.handlers (#729).

Bug fixes (some of the PRs)

  • Fixed ProgressBar output not being in sync with epoch counts (#773).
  • Fixed ProgressBar.log_message (#768).
  • ProgressBar now accounts for the epoch_length argument (#785).
  • Fixed broken ProgressBar when data is an iterator without epoch length (#995).
  • Improved setup_logger for multiple calls (#962).
  • Fixed incorrect log position (#1099).
  • Added missing colon to logging message (#1101).
  • Fixed order of checkpoint saving and candidate removal (#1117).

Examples

  • Basic example of FastaiLRFinder on MNIST (#838).
  • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
  • Added setup_logger to mnist examples (#953).
  • Added MNIST example on TPU (#956).
  • Benchmark amp on Cifar100 (#917).
  • Updated ImageNet and Pascal VOC12 examples (#1125, #1138)

Housekeeping


Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices

06 Jun 20:06

PyTorch-Ignite 0.4.0 RC - Release Notes

Core

BC breaking changes

  • Simplified engine - BC breaking change (#940, #939, #938)
    • no more internal patching of torch DataLoader.
    • the seed argument of Engine.run is deprecated.
    • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
  • Made all Events CallableEventsWithFilter (#788).
  • Made ignite compatible only with pytorch > 1.0 (#1016).
    • ignite is tested on the latest and nightly versions of pytorch.
    • exact compatibility with previous versions can be checked here.
  • Removed deprecated arguments from BaseLogger (#1051).
  • Deprecated CustomPeriodicEvent (#984).
  • RunningAverage now computes the average of the output quantity instead of a sum in DDP (#991).
  • Checkpoint now stores files with the .pt extension instead of .pth (#873).
  • The archived argument of Checkpoint and ModelCheckpoint is deprecated (#873).
  • create_supervised_trainer and create_supervised_evaluator no longer move the model to device (#910).

New Features and bug fixes

Ignite Distributed [Experimental]

  • Introduced ignite.distributed as the idist module (#1045)
    • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
    • supports native torch distributed configuration and XLA devices.
    • metrics computation works in all supported distributed configurations: GPUs and TPUs.

Engine & Events

  • Added flexibility to event handlers by packing triggering events (#868).
  • The engine argument is now optional in event handlers (#889, #919).
  • engine.state is now initialized before calling engine.run (#1028).
  • Engine can run on a dataloader based on IterableDataset without specifying epoch_length (#1077).
  • Added user keys to Engine's state dict (#914).
  • Bug fixes in the Engine class (#1048, #994).
  • The epoch_length argument is now optional (#985)
    • suitable for working with finite iterators of unknown length.
  • Added times to engine.state (#958).

Metrics

  • Added Frequency metric for ops/s calculations (#760, #783, #976).
  • Metrics computation can be customized with the introduced MetricUsage (#979, #1054)
    • batch-wise/epoch-wise or custom-programmed metric update and compute methods.
  • Metrics can be detached (#827).
  • Fixed a bug in RunningAverage when the output is a torch tensor (#943).
  • Improved computation performance of EpochMetric (#967).
  • Fixed the average recall value of ConfusionMatrix (#846).
  • Metrics can now be serialized using dill (#930).
  • Added support for nested metric values (#968).

Handlers and utils

  • Checkpoint: improved filename when the score value is an integer (#758).
  • Checkpoint: fixed returning the worst model of the saved models (#745).
  • Checkpoint: load_objects can load single-object checkpoints (#772).
  • Checkpoint: we now save only one checkpoint per priority (#847).
  • Checkpoint: added kwargs to Checkpoint.load_objects (#861).
  • Checkpoint: now saves model.module.state_dict() for DDP and DP (#1086).
  • Checkpoint and related: other improvements (#937).
  • Support namedtuple in convert_tensor (#740).
  • Added decorator one_rank_only (#882).
  • Updated common.py (#904).

Contrib

  • Added FastaiLRFinder (#596).

Metrics

  • Added RocCurve and PrecisionRecallCurve metrics (#875).

Parameters scheduling

  • Enabled multiple param groups for LRScheduler (#1027).
  • Parameters scheduling improvements (#1072, #859).

Support of experiment tracking systems

  • Added NeptuneLogger (#730, #821, #951, #954).
  • Added TrainsLogger (#1020, #1036, #1043).
  • Added WandbLogger (#926).
  • Added visdom_logger to the common module (#796).
  • TensorboardX is no longer mandatory if pytorch >= 1.2 (#858).
  • Simplified BaseLogger attach APIs (#1006).
  • Added kwargs to loggers' constructors and respective setup functions (#1015).

Time profiling

  • Added basic time profiler to contrib.handlers (#729).

Bug fixes (some of the PRs)

  • Fixed ProgressBar output not being in sync with epoch counts (#773).
  • Fixed ProgressBar.log_message (#768).
  • ProgressBar now accounts for the epoch_length argument (#785).
  • Fixed broken ProgressBar when data is an iterator without epoch length (#995).
  • Improved setup_logger for multiple calls (#962).
  • Fixed incorrect log position (#1099).
  • Added missing colon to logging message (#1101).

Examples

  • Basic example of FastaiLRFinder on MNIST (#838).
  • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
  • Added setup_logger to mnist examples (#953).
  • Added MNIST example on TPU (#956).
  • Benchmark amp on Cifar100 (#917).
  • TrainsLogger semantic segmentation example (#1095).

Housekeeping (some of the PRs)


Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

Bye-Bye Python 2.7, Welcome 3.8

21 Jan 23:26

Core

  • Added State repr and input batch as engine.state.batch (#641)
  • Adapted core metrics for use in distributed configuration (#635)
  • Added fbeta metric as core metric (#653)
  • Added event filtering feature (e.g. every/once/event filter logic) (#656)
  • BC breaking change: Refactored ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673) (see the sketch after this list)
    • Added option n_saved=None to store all checkpoints (#703)
  • Improved accumulation metrics (#681)
  • Added min_delta option to early stopping (#685)
  • Dropped Python 2.7 support (#699)
  • Added feature: Metric can accept a dictionary (#689)
  • Added Dice Coefficient metric (#680)
  • Added helper method to simplify the setup of class loggers (#712)
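A sketch of the refactored checkpointing API: Checkpoint pairs "what to save" with a saver object, and n_saved=None keeps every checkpoint (the toy model, trainer and path are illustrative):

    import torch.nn as nn
    from ignite.engine import Engine, Events
    from ignite.handlers import Checkpoint, DiskSaver

    model = nn.Linear(4, 2)
    trainer = Engine(lambda engine, batch: None)

    # DiskSaver handles the actual writing; n_saved=None disables rotation.
    handler = Checkpoint(
        {"model": model, "trainer": trainer},
        DiskSaver("/tmp/ckpts", create_dir=True),
        n_saved=None,
    )
    trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)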

Engine refactoring (BC breaking change)

Finally solved issue #62: training can now be resumed from an epoch or iteration

  • Engine refactoring + features (#640)
    • engine checkpointing
    • variable epoch length defined by epoch_length
    • two additional events: GET_BATCH_STARTED and GET_BATCH_COMPLETED
    • CIFAR10 example with save/resume in distributed configuration

Contrib

  • Improved create_lr_scheduler_with_warmup (#646)
  • Added helper method to plot param scheduler values with matplotlib (#650)
  • BC breaking change: param schedulers now support multiple optimizer param groups (#690)
    • Added state_dict/load_state_dict (#690)
  • BC breaking change: let the user specify tqdm parameters for log_message (#695)

Examples

  • Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
  • Added CIFAR10 distributed example

Reproducible trainings as "References"

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

Features:

  • Distributed training with mixed precision by nvidia/apex
  • Experiments tracking with MLflow or Polyaxon

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@anubhavashok, @kagrze, @maxfrei750, @vfdev-5

New features and bug fixes

03 Oct 22:44

Core

Various improvements in the core part of the library:

  • Added epoch_bound parameter to RunningAverage (#488)

  • Bug fixes in ConfusionMatrix with a new implementation (#572) - BC breaking

  • Added event_to_attr in register_events (#523)

  • Added accumulative single-variable metrics (#524)

  • should_terminate is reset between runs (#525)

  • to_onehot returns a tensor with uint8 dtype (#571) - may be BC breaking

  • Engine.add_event_handler() now returns a removable handle, enabling single-shot events (#588) (see the sketch after this list)

  • New documentation style 🎉
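A sketch of a single-shot handler built on the removable handle (#588):

    from ignite.engine import Engine, Events

    engine = Engine(lambda engine, batch: None)

    def run_once(engine):
        print("fires on the first epoch only")
        handle.remove()  # detach this handler after its first call

    # add_event_handler now returns a removable handle.
    handle = engine.add_event_handler(Events.EPOCH_COMPLETED, run_once)
    engine.run([0], max_epochs=3)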

Distributed

We removed the MNIST distributed example as misleading and provided a distrib branch (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code works and is under testing. Please try it in your use case and leave us feedback.

Now in Contributions module

  • Added mlflow logger (#558)
  • R-Squared Metric in regression metrics module (#496)
  • Add tag field to OptimizerParamsHandler (#502)
  • Improved ProgressBar with TerminateOnNan (#506)
  • Support for layer freezing with Tensorboard integration (#515)
  • Improved OutputHandler API (#531)
  • Improved create_lr_scheduler_with_warmup (#556)
  • Added "all" option to metric_names in contrib loggers (#565)
  • Added GPU usage info as metric (#569)
  • Other bug fixes

Notebook examples

  • Added Cycle-GAN notebook (#500)
  • Finetune EfficientNet-B0 on CIFAR100 (#544)
  • Added Fashion MNIST jupyter notebook (#549)

Updated nightly builds

From pip:

pip install --pre pytorch-ignite

From conda (this installs the pytorch nightly release instead of the stable version as a dependency):

conda install ignite -c pytorch-nightly

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey

New features and enhanced contrib module

09 Apr 16:05

Core

  • We removed the deprecated metric classes BinaryAccuracy and CategoricalAccuracy, which are replaced by Accuracy.

  • Multilabel option for Accuracy, Precision, Recall metrics.

  • Added other metrics.

  • Operations on metrics: p = Precision(average=False) (see the sketch after this list)

    • apply PyTorch operators: mean_precision = p.mean()
    • indexing: precision_no_bg = p[1:]
  • Improved our docs with more examples.

  • Added FAQ section with best practices.

  • Bug fixes
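A sketch of the metric operations referenced above: composing Precision and Recall into an F1 metric that attaches like any other. The evaluator here is a toy engine whose process function already returns (y_pred, y):

    from ignite.engine import Engine
    from ignite.metrics import Precision, Recall

    evaluator = Engine(lambda engine, batch: batch)

    precision = Precision(average=False)
    recall = Recall(average=False)

    # Arithmetic composes lazily into a new metric; .mean() applies the
    # matching PyTorch operator over the per-class values.
    f1 = (precision * recall * 2 / (precision + recall)).mean()
    f1.attach(evaluator, "f1")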

Now in Contributions module

Notebook examples

  • VAE on MNIST
  • CNN for text classification

Nightly builds with pytorch-nightly as dependency

We also provide pip/conda nightly builds with pytorch-nightly as dependency:

pip install pytorch-ignite-nightly

or

conda install -c pytorch ignite-nightly 

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!