Releases: pytorch/ignite
New metrics, extended DDP support and bug fixes
PyTorch-Ignite 0.4.5 - Release Notes
New Features
Metrics
- Added BLEU metric (#1834)
- Added ROUGE metric (#1772)
- Added MultiLabelConfusionMatrix metric (#1613)
- Added Cohen Kappa metric (#1690)
- Extended the `sync_all_reduce` API (#1823)
- Made `EpochMetric` more generic by extending the list of valid types (#1748)
- Fixed an issue with metrics' output device (#2062)
- Added support for list of tensors as metric input (#2055)
- Implemented Jaccard Index shortcut for metrics (#1682)
- Updated the Loss metric to use `required_output_keys` (#2027)
- Added a classification report metric (#1887)
- Added output detach for Canberra metric (#1820)
- Improved ROC AUC (#1762)
- Improved AveragePrecision metric and tests (#1756)
- Uniform handling of metric types across all loggers (#2021)
- More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)
Engine
- Added native `torch.cuda.amp` and `apex` automatic mixed precision for `create_supervised_trainer` and `create_supervised_evaluator` (#1714, #1589); see the sketch after this list
- Updated `state.batch`/`state.output` lifespan in `Engine` (#1919)
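A minimal sketch of the new AMP options, assuming ignite 0.4.5 and a CUDA device; the model, optimizer, and loss below are placeholders:

```python
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2).to("cuda")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(
    model, optimizer, loss_fn, device="cuda",
    amp_mode="amp",  # native torch.cuda.amp autocast; "apex" selects Nvidia/Apex
    scaler=True,     # create a torch.cuda.amp.GradScaler for loss scaling
)
```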
Distributed module
- Handled `IterableDataset` with `auto_dataloader` (#2028)
- Updated the Loss metric to use `required_output_keys` (#2027)
- Enabled GPU support for the gloo backend (#2016)
- Added `safe_mode` for `idist` broadcast (#1839); see the sketch after this list
- Improved `idist` to support different `init_method`s (#1767)
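A hedged sketch of `idist.broadcast` with the new `safe_mode` flag, assuming a distributed configuration is already initialized; with `safe_mode=True`, ranks other than `src` may pass `None` (the payload below is illustrative):

```python
import ignite.distributed as idist

if idist.get_rank() == 0:
    data = "run_2021/best_model.pt"  # hypothetical payload produced on rank 0
else:
    data = None  # allowed on non-src ranks when safe_mode=True

data = idist.broadcast(data, src=0, safe_mode=True)
```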
Other improvements
- Added LR finder improvements, moved to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930)
- Moved param handler to core (#1988)
- Added an option to store `EpochOutputStore` data on `engine.state`, moved to core (#1982, #1974); see the sketch after this list
- Set seed for XLA in `ignite.utils.manual_seed` (#1970)
- Fixed case for Precision/Recall in `multi_label`, not-averaged configuration for DDP (#1646)
- Updated `PolyaxonLogger` to handle v1 and v0 (#1625)
- Added arguments `*args`, `**kwargs` to `BaseLogger.attach` method (#2034)
- Enabled metric ordering on `ProgressBar` (#1937)
- Updated wandb logger (#1896)
- Fixed type hint for `ProgressBar` (#2079)
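A minimal sketch of storing `EpochOutputStore` data on `engine.state`, assuming the 0.4.5 core API where `attach` accepts a `name`; the attribute name below is illustrative:

```python
from ignite.engine import Engine
from ignite.handlers import EpochOutputStore

# Toy evaluator whose process function just echoes the batch
evaluator = Engine(lambda engine, batch: batch)

eos = EpochOutputStore()
eos.attach(evaluator, name="eos_data")  # collected outputs land on engine.state

evaluator.run([1, 2, 3])
print(evaluator.state.eos_data)  # [1, 2, 3]
```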
Bug fixes
- BC-breaking: Improved loggers to keep configuration (#1945)
- Fixed warnings in CI (#2023)
- Fixed Precision for all zero predictions (#2017)
- Renamed the default logger (#2006)
- Fixed Accumulation metric with Nvidia/Apex (#1978)
- Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
- Updated `nltk-smooth2` for the BLEU metric (#1911)
- Added full read permissions to saved file (#1876) (#1880)
- Fixed a bug with horovod `_do_manual_all_reduce` (#1848)
- Fixed a small bug in the "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
- Fixed f-string in the `mnist_save_resume_engine.py` example (#2077)
- Fixed an issue where RNG states were accidentally on CUDA for `DeterministicEngine` (#2081)
Housekeeping
A lot of PRs
- Test improvements (#2061, #2057, #2047, #1962, #1957, #1946, #1943, #1928, #1927, #1915, #1914, #1908, #1906, #1905, #1903, #1902, #1899, #1882, #1870, #1866, #1860, #1846, #1832, #1828, #1821, #1816, #1815, #1814, #1812, #1811, #1809, #1808, #1807, #1804, #1802, #1801, #1799, #1798, #1797, #1796, #1795, #1793, #1791, #1785, #1784, #1783, #1781, #1776, #1774, #1769, #1768, #1760, #1755, #1746, #1741, #1718, #1717, #1713, #1631)
- Documentation improvements and updates (#2058, #2024, #2005, #2003, #2001, #1993, #1990, #1933, #1893, #1849, #1780, #1770, #1727, #1726, #1722, #1686, #1685, #1672, #1671, #1661)
- Example improvements (#1924, #1918, #1890, #1827, #1771, #1669, #1658, #1656, #1652, #1642, #1633, #1632)
- CI updates (#2075, #2070, #2069, #2068, #2067, #2064, #2044, #2039, #2037, #2023, #1985, #1979, #1940, #1907, #1892, #1888, #1878, #1877, #1873, #1867, #1861, #1847, #1841, #1838, #1837, #1835, #1831, #1818, #1773, #1764, #1761, #1759, #1752, #1745, #1743, #1742, #1739, #1738, #1736, #1724, #1706, #1705, #1667, #1664, #1647)
- Code style improvements (#2050, #2014, #1817, #1749, #1747, #1740, #1734, #1732, #1731, #1707, #1703)
- Added docker image test script (#1733)
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff
Bug fixes and docs improvements
PyTorch-Ignite 0.4.4 - Release Notes
Bug fixes:
- BC-breaking: Moved `detach` outside of the loss function computation (#1675, #1692)
- Added `eps` to avoid NaNs in Canberra error (#1699)
- Removed size limitation for `str` on collective ops (#1702)
- Fixed imports in Docker images; Pillow-SIMD is now installed (#1638, #1639, #1628, #1711)
Doc improvements
Other improvements
- Fixed artifacts urls for pypi (#1629)
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff
New features, better docs, dropped python 3.5
PyTorch-Ignite 0.4.3 - Release Notes
🎉 Since September we have a new logo (#1324) 🎉
Core
Metrics
- [BC-breaking] Made metrics accumulate values on the device specified by the user (#1238)
- Fixed BC if a custom metric returns a dict (#1478)
- Added PSNR metric (#1570, #1595)
Handlers
- Checkpoint can save models with the same filename (#1423)
- Added `greater_or_equal` option to the Checkpoint handler (#1597); see the sketch after this list
- Updated handlers to use `setup_logger` (#1617)
- Added TimeLimit handler (#1611)
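A hedged sketch of the new `greater_or_equal` option, assuming the 0.4.3 `Checkpoint` API; with `greater_or_equal=True`, a checkpoint whose score ties the current best is still saved (the directory and metric name are illustrative):

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(2, 2)  # placeholder model
evaluator = Engine(lambda engine, batch: batch)

handler = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/models", create_dir=True),
    score_name="val_acc",
    score_function=lambda engine: engine.state.metrics["accuracy"],
    greater_or_equal=True,  # keep checkpoints that tie the best score
)
evaluator.add_event_handler(Events.COMPLETED, handler)
```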
Distributed helper module
- Added distributed CPU tests on Windows (#1429)
- Added kwargs to `idist.auto_model` (#1552)
- Improved horovod initializer (#1559)
Others
- Dropped python 3.5 support (#1500)
- Added `torch.cuda.manual_seed_all` to `ignite.utils.manual_seed` (#1444)
- Fixed `to_onehot` function to be torch scriptable (#1592)
- Introduced standard stream for logger setup helper (#1601)
Docker images
- Removed Entrypoint from Dockerfile and images (#1475)
Examples
- Added [CIFAR10 QAT example](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10_qat) (#1556)
Contrib
Metrics
- Improved Canberra metric for DDP (#1314)
- Improved ManhattanDistance metric for DDP (#1320)
- Improved R2Score metric for DDP (#1318)
Handlers
- Added new time profiler `HandlersTimeProfiler`, which allows per-handler time profiling (#1398, #1474)
- Fixed `attach_opt_params_handler` to return `RemovableEventHandle` (#1502)
- Renamed `TrainsLogger` to `ClearMLLogger`, keeping BC (#1557, #1560)
Documentation improvements
- #1330, #1337, #1338, #1353, #1360, #1374, #1373, #1394, #1393, #1401, #1435, #1460, #1461, #1465, #1536, #1542 ...
- Updated Sphinx to v3.2.1 (#1356, #1372)
Codebase is MyPy checked
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn
Improved distributed support (horovod framework, epoch-wise metrics, etc), new metrics/handlers, bug fixes and pre-built docker images.
PyTorch-Ignite 0.4.2 - Release Notes
Core
New Features and bug fixes
- Added SSIM metric (#1217)
- Added prebuilt Docker images (#1218)
- Added distributed support for `EpochMetric` and related metrics (#1229)
- Added `required_output_keys` public attribute (#1291)
- Pre-built Docker images for computer vision and NLP tasks, powered with Nvidia/Apex, Horovod, MS DeepSpeed (#1304, #1248, #1218)
Handlers and utils
- Allow passing keyword arguments to the save function on `Checkpoint` (#1245)
Distributed helper module
- Added support of Horovod (#1195)
- Added `idist.broadcast` (#1237)
- Added `sync_bn` option to `idist.auto_model` (#1265); see the sketch after this list
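A hedged sketch of `idist.auto_model` with the new `sync_bn` option, assuming a distributed configuration is already set up; `sync_bn=True` converts BatchNorm layers to their synchronized counterpart before wrapping the model:

```python
import torch.nn as nn
import ignite.distributed as idist

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
# Wraps the model for the current distributed backend and converts
# BatchNorm to SyncBatchNorm when sync_bn=True
model = idist.auto_model(model, sync_bn=True)
```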
Contrib
New Features and bug fixes
- Added `EpochOutputStore` handler (#1226)
- Improved displayed tag for tqdm progress bar (#1279)
- Fixed bug with `ParamGroupScheduler` with schedulers based on different optimizers (#1274)
And a lot of housekeeping: pre-September Hacktoberfest contributions
- Added initial Mypy check at CI step (#1296)
- Fixed typo in docs (concepts) (#1295)
- Fixed link to pytorch documents (#1294)
- Removed prints from tests (#1292)
- Downgraded tqdm version to stabilize the CI (#1293)
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff
Bugfixes and updates
PyTorch-Ignite 0.4.1 - Release Notes
Core
New Features and bug fixes
- Improved docs for custom events (#1179)
Handlers and utils
- Added custom filename pattern for saving checkpoints (#1127)
Distributed helper module
- Improved namings in `_XlaDistModel` (#1173)
- Minor optimization for `idist.get_*` methods (#1196)
- Fixed distributed proxy sampler runtime error (#1192)
- Fixed bug using `idist` with the "nccl" backend when torch CUDA is not available (#1166)
- Fixed issue with logging XLA tensors (#1207)
Contrib
New Features and bug fixes
- Fixed warning "TrainsLogger output_handler can not log metrics value" (#1170)
- Improved usage of contrib common methods with other save handlers (#1171)
Examples
- Improved Pascal VOC example (#1193)
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5
Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices
PyTorch-Ignite 0.4.0 - Release Notes
Core
BC breaking changes
- Simplified engine - BC breaking change (#940, #939, #938)
  - No more internal patching of torch DataLoader.
  - The `seed` argument of `Engine.run` is deprecated.
  - Previous behaviour can be achieved with `DeterministicEngine`, introduced in #939.
- Made all `Events` be `CallableEventsWithFilter` (#788).
- Made ignite compatible only with pytorch >= 1.3 (#1016, #1150).
  - ignite is tested on the latest and nightly versions of pytorch.
  - Exact compatibility with previous versions can be checked here.
- Removed deprecated arguments from `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes the output quantity average instead of a sum in DDP (#991).
- Checkpoint now stores files with the `.pt` extension instead of `.pth` (#873).
- The `archived` arguments of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- Now `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).
See also migration note for details on how to update your code.
New Features and bug fixes
Ignite Distributed [Experimental]
- Introduced the `ignite.distributed as idist` module (#1045):
  - common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
  - supports native torch distributed configuration and XLA devices.
  - metrics computation works in all supported distributed configurations: GPUs and TPUs.
- `Parallel` utility and `auto` module (#1014); see the sketch after this list.
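A minimal sketch of the `Parallel` utility, assuming ignite 0.4.0; the same script runs on a single machine here with the gloo backend, and the config dict is illustrative:

```python
import ignite.distributed as idist

def training(local_rank, config):
    # Runs in every spawned process; idist gives rank/world-size helpers
    print(idist.get_rank(), "- lr:", config["lr"])

config = {"lr": 0.01}  # hypothetical user config

# Spawns 2 processes with the gloo backend and runs `training` in each
with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
    parallel.run(training, config)
```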
Engine & Events
- Added flexibility on event handlers by packing triggering events (#868).
- The `Engine` argument is now optional in event handlers (#889, #919); see the sketch after this list.
- We initialize `engine.state` before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Added user keys into Engine's state dict (#914).
- Bug fixes in the `Engine` class (#1048, #994).
- Now the `epoch_length` argument is optional (#985): suitable for finite-unknown-length iterators.
- Added times in `engine.state` (#958).
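A hedged sketch of the event system after these changes, assuming ignite 0.4.0: handlers can be attached with callable event filters (from `CallableEventsWithFilter`, #788 above), and the engine argument is optional:

```python
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: None)  # toy process function

@trainer.on(Events.ITERATION_COMPLETED(every=100))
def log_every_100(engine):
    print("iteration:", engine.state.iteration)

@trainer.on(Events.EPOCH_COMPLETED(once=3))
def on_third_epoch():  # the engine argument is now optional
    print("epoch 3 completed")
```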
Metrics
- Added `Frequency` metric for ops/s calculations (#760, #783, #976).
- Metrics computation can be customized with the introduced `MetricUsage` (#979, #1054): batch-wise/epoch-wise or custom-programmed metric update and compute methods; see the sketch after this list.
- `Metric` can be detached (#827).
- Fixed bug in `RunningAverage` when the output is a torch tensor (#943).
- Improved computation performance of `EpochMetric` (#967).
- Fixed average recall value of `ConfusionMatrix` (#846).
- Now metrics can be serialized using `dill` (#930).
- Added support for nested metric values (#968).
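A hedged sketch of customizing metric usage, assuming the 0.4.0 API where `Metric.attach` accepts a usage such as `BatchWise`:

```python
from ignite.engine import Engine
from ignite.metrics import Accuracy
from ignite.metrics.metric import BatchWise

# Toy evaluator whose process function must return (y_pred, y) pairs
evaluator = Engine(lambda engine, batch: batch)

acc = Accuracy()
# BatchWise() resets/updates/computes the metric every batch instead of per epoch
acc.attach(evaluator, "batch_acc", usage=BatchWise())
```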
Handlers and utils
- Checkpoint: improved filename when the score value is an integer (#758).
- Checkpoint: fixed returning the worst model of the saved models (#745).
- Checkpoint: `load_objects` can load single-object checkpoints (#772).
- Checkpoint: we now save only one checkpoint per priority (#847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (#861); see the sketch after this list.
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (#1086).
- Checkpoint and related: other improvements (#937).
- Checkpoint and EarlyStopping became stateful (#1156).
- Support namedtuple for `convert_tensor` (#740).
- Added decorator `one_rank_only` (#882).
- Updated `common.py` (#904).
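A hedged sketch of restoring objects with `Checkpoint.load_objects`, assuming the 0.4.0 API; the checkpoint path is illustrative:

```python
import torch
import torch.nn as nn
from ignite.handlers import Checkpoint

model = nn.Linear(2, 2)  # placeholder objects to restore
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

to_load = {"model": model, "optimizer": optimizer}
checkpoint = torch.load("/tmp/models/checkpoint_1234.pt", map_location="cpu")
Checkpoint.load_objects(to_load=to_load, checkpoint=checkpoint)
```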
Contrib
- Added `FastaiLRFinder` (#596).
Metrics
- Added ROC Curve and Precision/Recall Curve to the metrics (#875).
Parameters scheduling
- Enabled multiple param groups for `LRScheduler` (#1027).
- Parameters scheduling improvements (#1072, #859).
- Parameter schedulers can work on a torch optimizer and any object with a `param_groups` attribute (#1163).
Support of experiment tracking systems
- Added `NeptuneLogger` (#730, #821, #951, #954).
- Added `TrainsLogger` (#1020, #1036, #1043).
- Added `WandbLogger` (#926).
- Added `visdom_logger` to the common module (#796).
- TensorboardX is no longer mandatory if pytorch >= 1.2 (#858).
- Simplified `BaseLogger` attach APIs (#1006).
- Added kwargs to loggers' constructors and respective setup functions (#1015).
Time profiling
- Added basic time profiler to `contrib.handlers` (#729).
Bug fixes (some of PRs)
- `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for the `epoch_length` argument (#785).
- Fixed broken `ProgressBar` if data is an iterator without epoch length (#995).
- Improved `setup_logger` for multiple calls (#962).
- Fixed incorrect log position (#1099).
- Added missing colon to logging message (#1101).
- Fixed order of checkpoint saving and candidate removal (#1117)
Examples
- Basic example of `FastaiLRFinder` on MNIST (#838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (#888).
- Added `setup_logger` to MNIST examples (#953).
- Added MNIST example on TPU (#956).
- Benchmark AMP on Cifar100 (#917).
- Updated ImageNet and Pascal VOC12 examples (#1125, #1138)
Housekeeping
- Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092, ...).
- Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093, #1113, ...).
- Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058, ...).
- Added `Serializable` in mixins (#1000).
- Merged `EpochMetric` into `_BaseRegressionEpoch` (#970).
- Added typing to ignite (#716, #751, #800, #844, #944, #1037).
- Drop Python 2 support finalized (#806).
- Splits engine into multiple parts (#724).
- Add Python 3.8 to Conda builds (#781).
- Black formatted codebase with pre-commit files (#792).
- Activate dpl v2 for Travis CI (#804).
- AutoPEP8 (#805).
- Fixed device conversion method (#887).
- Refactored deps installation (#931).
- Return handler in helpers (#997).
- Fixes #833 (#1001).
- Disabled propagation of loggers to ancestors (#1013).
- Consistent PEP8-compliant imports layout (#901).
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices
PyTorch-Ignite 0.4.0 RC - Release Notes
Core
BC breaking changes
- Simplified engine - BC breaking change (#940, #939, #938)
  - No more internal patching of torch DataLoader.
  - The `seed` argument of `Engine.run` is deprecated.
  - Previous behaviour can be achieved with `DeterministicEngine`, introduced in #939.
- Made all `Events` be `CallableEventsWithFilter` (#788).
- Made ignite compatible only with pytorch > 1.0 (#1016).
  - ignite is tested on the latest and nightly versions of pytorch.
  - Exact compatibility with previous versions can be checked here.
- Removed deprecated arguments from `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes the output quantity average instead of a sum in DDP (#991).
- Checkpoint now stores files with the `.pt` extension instead of `.pth` (#873).
- The `archived` arguments of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- Now `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).
New Features and bug fixes
Ignite Distributed [Experimental]
- Introduced the `ignite.distributed as idist` module (#1045):
  - common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
  - supports native torch distributed configuration and XLA devices.
  - metrics computation works in all supported distributed configurations: GPUs and TPUs.
Engine & Events
- Added flexibility on event handlers by packing triggering events (#868).
- The `Engine` argument is now optional in event handlers (#889, #919).
- We initialize `engine.state` before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Added user keys into Engine's state dict (#914).
- Bug fixes in the `Engine` class (#1048, #994).
- Now the `epoch_length` argument is optional (#985): suitable for finite-unknown-length iterators.
- Added times in `engine.state` (#958).
Metrics
- Added `Frequency` metric for ops/s calculations (#760, #783, #976).
- Metrics computation can be customized with the introduced `MetricUsage` (#979, #1054): batch-wise/epoch-wise or custom-programmed metric update and compute methods.
- `Metric` can be detached (#827).
- Fixed bug in `RunningAverage` when the output is a torch tensor (#943).
- Improved computation performance of `EpochMetric` (#967).
- Fixed average recall value of `ConfusionMatrix` (#846).
- Now metrics can be serialized using `dill` (#930).
- Added support for nested metric values (#968).
Handlers and utils
- Checkpoint: improved filename when the score value is an integer (#758).
- Checkpoint: fixed returning the worst model of the saved models (#745).
- Checkpoint: `load_objects` can load single-object checkpoints (#772).
- Checkpoint: we now save only one checkpoint per priority (#847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (#861).
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (#1086).
- Checkpoint and related: other improvements (#937).
- Support namedtuple for `convert_tensor` (#740).
- Added decorator `one_rank_only` (#882).
- Updated `common.py` (#904).
Contrib
- Added `FastaiLRFinder` (#596).
Metrics
- Added ROC Curve and Precision/Recall Curve to the metrics (#875).
Parameters scheduling
- Enabled multiple param groups for `LRScheduler` (#1027).
- Parameters scheduling improvements (#1072, #859).
Support of experiment tracking systems
- Added `NeptuneLogger` (#730, #821, #951, #954).
- Added `TrainsLogger` (#1020, #1036, #1043).
- Added `WandbLogger` (#926).
- Added `visdom_logger` to the common module (#796).
- TensorboardX is no longer mandatory if pytorch >= 1.2 (#858).
- Simplified `BaseLogger` attach APIs (#1006).
- Added kwargs to loggers' constructors and respective setup functions (#1015).
Time profiling
- Added basic time profiler to `contrib.handlers` (#729).
Bug fixes (some of PRs)
- `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for the `epoch_length` argument (#785).
- Fixed broken `ProgressBar` if data is an iterator without epoch length (#995).
- Improved `setup_logger` for multiple calls (#962).
- Fixed incorrect log position (#1099).
- Added missing colon to logging message (#1101).
Examples
- Basic example of `FastaiLRFinder` on MNIST (#838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (#888).
- Added `setup_logger` to MNIST examples (#953).
- Added MNIST example on TPU (#956).
- Benchmark AMP on Cifar100 (#917).
- `TrainsLogger` semantic segmentation example (#1095).
Housekeeping (some of PRs)
- Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092).
- Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093).
- Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058).
- Added `Serializable` in mixins (#1000).
- Merged `EpochMetric` into `_BaseRegressionEpoch` (#970).
- Added typing to ignite (#716, #751, #800, #844, #944, #1037).
- Drop Python 2 support finalized (#806).
- Dynamic typing (#723).
- Splits engine into multiple parts (#724).
- Add Python 3.8 to Conda builds (#781).
- Black formatted codebase with pre-commit files (#792).
- Activate dpl v2 for Travis CI (#804).
- AutoPEP8 (#805).
- Fixes nightly version bug (#809).
- Fixed device conversion method (#887).
- Refactored deps installation (#931).
- Return handler in helpers (#997).
- Fixes #833 (#1001).
- Disabled propagation of loggers to ancestors (#1013).
- Consistent PEP8-compliant imports layout (#901).
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
Bye-Bye Python 2.7, Welcome 3.8
Core
- Added State repr and input batch as engine.state.batch (#641)
- Adapted core metrics only to be used in distributed configuration (#635)
- Added fbeta metric as core metric (#653)
- Added event filtering feature (e.g. every/once/event filter logic) (#656)
- BC breaking change: Refactored ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673); see the sketch after this list
- Added option `n_saved=None` to store all checkpoints (#703)
- Improved accumulation metrics (#681)
- Early stopping min delta (#685)
- Dropped Python 2.7 support (#699)
- Added feature: Metric can accept a dictionary (#689)
- Added Dice Coefficient metric (#680)
- Added helper method to simplify the setup of class loggers (#712)
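A hedged sketch of the refactored checkpointing API, assuming the 0.3.0 `Checkpoint`/`DiskSaver` interface; the directory is illustrative:

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(2, 2)  # placeholder model
trainer = Engine(lambda engine, batch: None)

handler = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/ckpts", create_dir=True),
    n_saved=None,  # None keeps every checkpoint (#703)
)
trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)
```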
Engine refactoring (BC breaking change)
Finally solved the issue #62 to resume training from an epoch or iteration
- Engine refactoring + features (#640):
  - engine checkpointing
  - variable epoch length defined by `epoch_length`
  - two additional events: `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
  - cifar10 example with save/resume in distributed configuration
Contrib
- Improved `create_lr_scheduler_with_warmup` (#646)
- Added helper method to plot param scheduler values with matplotlib (#650)
- BC breaking change: with multiple optimizer param groups (#690)
- Added state_dict/load_state_dict (#690)
- BC breaking change: let the user specify tqdm parameters for `log_message` (#695)
Examples
- Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
- Added CIFAR10 distributed example
Reproducible trainings as "References"
Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:
Features:
- Distributed training with mixed precision by nvidia/apex
- Experiments tracking with MLflow or Polyaxon
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
New features and bug fixes
Core
Various improvements in the core part of the library:
- Added `epoch_bound` parameter to `RunningAverage` (#488)
- Bug fixes with confusion matrix, new implementation (#572) - BC breaking
- Added `event_to_attr` in `register_events` (#523)
- Added accumulative single-variable metrics (#524)
- `should_terminate` is reset between runs (#525)
- `to_onehot` returns tensor with uint8 dtype (#571) - may be BC breaking
- Removable handle returned from `Engine.add_event_handler()` to enable single-shot events (#588); see the sketch after this list
- New documentation style 🎉
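A hedged sketch of single-shot handlers via the removable handle, assuming the API introduced in #588; calling `remove()` detaches the handler:

```python
from ignite.engine import Engine, Events

engine = Engine(lambda engine, batch: None)  # toy process function

def run_once(engine):
    print("fires a single time")
    handle.remove()  # detach this handler after its first call

handle = engine.add_event_handler(Events.ITERATION_COMPLETED, run_once)
```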
Distributed
We removed the MNIST distributed example as being misleading and provided a distrib branch (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code is working and under testing. Please try it in your use case and leave us feedback.
Now in Contributions module
- Added mlflow logger (#558)
- R-Squared metric in regression metrics module (#496)
- Added tag field to `OptimizerParamsHandler` (#502)
- Improved `ProgressBar` with `TerminateOnNan` (#506)
- Support for layer freezing with Tensorboard integration (#515)
- Improved `OutputHandler` API (#531)
- Improved `create_lr_scheduler_with_warmup` (#556)
- Added "all" option to `metric_names` in contrib loggers (#565)
- Added GPU usage info as metric (#569)
- Other bug fixes
Notebook examples
- Added Cycle-GAN notebook (#500)
- Finetune EfficientNet-B0 on CIFAR100 (#544)
- Added Fashion MNIST jupyter notebook (#549)
Updated nightly builds
From pip:
`pip install --pre pytorch-ignite`
From conda (this installs the pytorch nightly release instead of the stable version as a dependency):
`conda install ignite -c pytorch-nightly`
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey
New features and enhanced contrib module
Core
- We removed the deprecated metric classes `BinaryAccuracy` and `CategoricalAccuracy`, which are replaced by `Accuracy`.
- Multilabel option for `Accuracy`, `Precision`, `Recall` metrics.
- Added other metrics.
- Operations on metrics (see the sketch after this list): `p = Precision(average=False)`
  - apply PyTorch operators: `mean_precision = p.mean()`
  - indexing: `precision_no_bg = p[1:]`
- Improved our docs with more examples.
- Added FAQ section with best practices.
- Bug fixes
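The metric operations above, assembled into a hedged, runnable snippet; the toy engine's process function is assumed to return `(y_pred, y)` pairs:

```python
from ignite.engine import Engine
from ignite.metrics import Precision

evaluator = Engine(lambda engine, batch: batch)  # toy process function

p = Precision(average=False)   # per-class precision as a tensor
mean_precision = p.mean()      # PyTorch operator applied lazily to the metric
precision_no_bg = p[1:]        # indexing: drop class 0 (e.g. background)

mean_precision.attach(evaluator, "mean_precision")
precision_no_bg.attach(evaluator, "precision_no_bg")
```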
Now in Contributions module
- added `TensorboardLogger`
- added `VisdomLogger`
- added `PolyaxonLogger`
- improved `ProgressBar`
- New regression metrics
- Median Absolute Error
- Median Relative Absolute Error
- Median Absolute Percentage Error
- Geometric Mean Relative Absolute Error
- Canberra Metric
- Fractional Absolute Error
- Wave Hedges Distance
- Geometric Mean Absolute Error
- added new parameter scheduling classes and improved parameters:
  - `PiecewiseLinear`
  - `LRScheduler`
  - other helper methods
- added custom events support: `CustomPeriodicEvent`
Notebook examples
- VAE on MNIST
- CNN for text classification
Nightly builds with pytorch-nightly as dependency
We also provide pip/conda nightly builds with `pytorch-nightly` as dependency:
`pip install pytorch-ignite-nightly`
or
`conda install -c pytorch ignite-nightly`
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou
vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!