Releases · tensorflow/tensor2tensor
v1.15.7
- Multistep Adam optimizer, many thanks to @AgoloCuongHoang for contributing in #1773! A sketch of the idea follows this list.
- Residual Shuffle-Exchange Network, thanks to @EmilsOzolins in #1805!
- No longer pinning the gym version.
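The multistep idea, in brief: accumulate gradients over n micro-batches, then apply a single Adam update, simulating an n-times-larger batch. Below is a minimal NumPy sketch of that idea; the class and parameter names are illustrative and do not mirror T2T's MultistepAdamOptimizer API.

```python
import numpy as np

class MultistepAdam:
    """Accumulates gradients for n steps, then applies one Adam update.

    Illustrative sketch only; not tensor2tensor's actual API.
    """

    def __init__(self, params, n=4, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.params, self.n, self.lr = params, n, lr
        self.b1, self.b2, self.eps = b1, b2, eps
        self.m = [np.zeros_like(p) for p in params]    # 1st moment
        self.v = [np.zeros_like(p) for p in params]    # 2nd moment
        self.acc = [np.zeros_like(p) for p in params]  # gradient accumulator
        self.step, self.t = 0, 0

    def apply_gradients(self, grads):
        for a, g in zip(self.acc, grads):
            a += g
        self.step += 1
        if self.step < self.n:
            return  # keep accumulating; no parameter update yet
        self.t += 1
        for i, p in enumerate(self.params):
            g = self.acc[i] / self.n  # average over the micro-batches
            self.m[i] = self.b1 * self.m[i] + (1 - self.b1) * g
            self.v[i] = self.b2 * self.v[i] + (1 - self.b2) * g * g
            m_hat = self.m[i] / (1 - self.b1 ** self.t)
            v_hat = self.v[i] / (1 - self.b2 ** self.t)
            p -= self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
            self.acc[i][:] = 0.0
        self.step = 0
```

Each call feeds one micro-batch's gradients; only every n-th call touches the parameters.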
v1.15.6
v1.15.5
v1.15.4
- Flush out some more tf.contrib remnants.
v1.15.3
- Changes to handle the TF 1.x to 2.x transition for tf.contrib.
v1.15.2
Changes needed to be able to import problems with TF 2.0.
v1.15.1
- Move away from tf.flags to absl-py's flags. See the example after this list.
- Move away from std::string to tensorflow::string.
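For reference, a minimal absl-py flags program in the style that replaced tf.flags; the flag names here are illustrative only:

```python
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_string("model", "transformer", "Model name to train.")
flags.DEFINE_integer("batch_size", 32, "Examples per training batch.")

def main(argv):
    del argv  # unused
    print(f"Training {FLAGS.model} with batch size {FLAGS.batch_size}")

if __name__ == "__main__":
    app.run(main)
```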
v1.15.0
Final T2T major release.
T2T is now in maintenance mode: we keep it running and welcome bug fixes, but encourage users to switch to the successor library, Trax.
PRs Merged
- #1724 by @Separius: use batch_size in _test_img2img_transformer. Thanks!
- #1726 by @senarvi: fix decoding in prepend mode. Thanks!
- #1733 by @prasastoadi: En-Id untokenized parallel corpora. Thanks!
- #1748 by @gabegrand: adds a Text2RealProblem class. Thanks a lot, @gabegrand!
Bug Fixes
- Fix features and decoding on TPUs, by @mts42000.
- Fixes by @iansimon and Kristy Choi around shape assertions and modalities.
- @superbobry fixed cases where tf.TensorShape was constructed with float dimensions.
Misc
- Trax was moved into its own repo: https://github.com/google/trax
v1.14.1
PRs Merged
- #1720, thanks @przemb!
- #1698 and #1699, test/util file fixes, thanks to @Vooblin!
- Fix serving response from Cloud ML Engine (#1688), thanks to @evalphobia!
- Refine automatic mixed precision support via a hyperparameter (#1681), thanks @vinhngx!
- Correct the return shape of rel_pos2abs_pos() (#1686), thanks to @Separius!
- Save attention weights for relative attention v2 (#1682), thanks to @Ghostvv!
- Update generator_utils.py (#1674), thanks to @TanguyUrvoy!
Docs
- Transformer tutorial (#1675), many thanks to @Styleoshin!
Problems
- 4 new dialog problems by @ricsinaruto in #1642
Models
- Extend NeuralStack to support a Deque by reading/writing in both directions, thanks @narphorium!
TRAX
- Lots of work on SimPLe tuning hyperparameters by @koz4k, @lukaszkaiser and @afrozenator.
- Async data collection for RL in TRAX.
- New memory-efficient Transformer using reversible layers, thanks to Nikita Kitaev, @lukaszkaiser and Anselm Levskaya. A sketch of the reversible block follows this list.
- Losses and metrics are now layers in TRAX, thanks to @lukaszkaiser.
- Activations in TRAX, thanks to @joaogui1 in #1684 and #1666.
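The memory savings of the reversible Transformer come from reversible residual blocks: inputs can be recomputed exactly from outputs, so activations need not be stored for the backward pass. A minimal NumPy sketch of the forward/inverse pair, with placeholder sublayers f and g standing in for attention and feed-forward:

```python
import numpy as np

def rev_block_forward(x1, x2, f, g):
    """Reversible residual block: y1 = x1 + f(x2); y2 = x2 + g(y1)."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    """Recovers the inputs exactly, so activations need not be stored."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# Round-trip check with arbitrary fixed sublayers.
f = lambda x: np.tanh(x)
g = lambda x: 0.5 * x
x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = rev_block_forward(x1, x2, f, g)
assert np.allclose((x1, x2), rev_block_inverse(y1, y2, f, g))
```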
v1.14.0
Models / Layers:
- NeuralStack and NeuralQueue added in 838aca4, thanks @narphorium!
- Open-sourced the search space used in the Evolved Transformer (4ce3661).
- Masked local n-D attention added in 2da59d2. A 1-D illustration follows this list.
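To illustrate the local-attention masking in one dimension (the n-D version applies the same idea per axis): each query attends only to keys within a fixed window. A NumPy sketch using a dense banded mask, whereas the real implementation works on blocks; function and argument names are illustrative:

```python
import numpy as np

def local_attention_1d(q, k, v, window=4):
    """Each query position attends only to keys within `window` steps.

    q, k, v: [length, depth] arrays. Simplified sketch: builds a full
    banded mask rather than the blocked computation used in practice.
    """
    length, depth = q.shape
    logits = q @ k.T / np.sqrt(depth)            # [length, length]
    idx = np.arange(length)
    band = np.abs(idx[:, None] - idx[None, :]) <= window
    logits = np.where(band, logits, -1e9)        # mask distant positions
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                           # [length, depth]
```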
Problems:
- Add English-Spanish translation problem (#1626), thanks @voluntadpear!
- MovingMNist added in 121ee60, thanks @MechCoder!
Bug Fixes:
- Loss twice multiplied with loss_coef (#1627), fixed by @davidmrau. Thanks a lot, David!
- Fix log_prob accumulation during decoding, thanks @lmthang!
- Fixed high usage of TPU HBM "Arguments" during serving in d38f343, thanks @ziy!
- Should not generate summary during decoding in dot_product_relative_attention (#1618), thanks @phamthuonghai!
Misc changes:
- Implement sequence packing as a tf.data.Dataset transformation in 560c008, thanks @robieta! A sketch of the packing idea follows this list.
- Lots of work on t2t_distill and model exporting, thanks @ziy!
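Packing concatenates short examples into fixed-length rows so less compute is wasted on padding, emitting segment ids so attention cannot cross example boundaries. A greedy first-fit sketch of the idea in plain Python (the actual change implements it as a tf.data transformation; this helper is illustrative only):

```python
def pack_sequences(sequences, max_length):
    """Greedy first-fit packing of variable-length sequences.

    Returns (packed, segment_ids): rows padded to max_length, and
    per-token ids (1, 2, ...) marking which original example each token
    came from (0 = padding). Assumes each sequence fits in max_length.
    """
    packed, segments = [], []
    for seq in sequences:
        for row, seg in zip(packed, segments):
            if len(row) + len(seq) <= max_length:
                seg.extend([max(seg) + 1] * len(seq))
                row.extend(seq)
                break
        else:  # no existing row had room; start a new one
            packed.append(list(seq))
            segments.append([1] * len(seq))
    pad = lambda r: r + [0] * (max_length - len(r))
    return [pad(r) for r in packed], [pad(s) for s in segments]

rows, segs = pack_sequences([[7, 8], [5], [1, 2, 3]], max_length=4)
# rows -> [[7, 8, 5, 0], [1, 2, 3, 0]]
# segs -> [[1, 1, 2, 0], [1, 1, 1, 0]]
```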
RL:
- Introduce Rainbow (#1607) by @konradczechowski.
- Changes to MBRL by @konradczechowski and @koz4k in multiple PRs.
PRs:
- Adding automatic mixed precision support (#1637), thanks a lot to @vinhngx! See the sketch after this list.
- Documentation for creating your own model (#1589), thanks @hbrylkowski!
- Adding an extra linear to the semantic hashing discretization bottleneck (#1578), thanks @martiansideofthemoon!
- Using partial targets at inference time (#1596), thanks @EugKar!
- Updated link to the DeepMind Math dataset (#1583), thanks @MaxSobolMark!
- Only strip the end of line (#1577), thanks @funtion!
- Correct a typo in add_timing_signal_nd (#1651), many thanks to @Separius!
- Fix a decode bug (#1645), many thanks to @dong-s!
- Change a confusing function name (#1669), thanks @lazylife7157!
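For context on the mixed-precision PR: in TF 1.14+, automatic mixed precision can be enabled by wrapping the optimizer with a graph-rewrite helper, roughly as below. How T2T wires this through its hyperparameters differs; this is a minimal standalone sketch.

```python
import tensorflow as tf

# TF 1.14+ graph rewrite: runs eligible ops in float16 and adds
# dynamic loss scaling automatically; wrapping the optimizer is all
# that is needed here.
opt = tf.train.AdamOptimizer(learning_rate=1e-3)
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
# `opt` is then used exactly as before (minimize, apply_gradients, ...).
```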
TRAX:
Base
- Forked optimizers from JAX and made them objects in 1c7c10c.
- TRAX layers are now stateful and support custom gradients.
- Multi-device capability added.
- Memory-efficient trainer added in b2615aa, thanks Nikita Kitaev!
- Adafactor optimizer added in TRAX in 63c015f.
- Demo Colab added in cec26db, thanks @levskaya!
- Demo Colab for TRAX layers in 7632ed0.
- Transformer, TransformerLM, Reversible Transformer, PositionLookupTransformer and Resnet50 are some of the models that TRAX now supports.