Bump tensorflow-gpu from 1.15.0 to 2.4.0 #1025

Merged
merged 2 commits into master from dependabot/pip/tensorflow-gpu-2.4.0 on Mar 21, 2021

Conversation

dependabot[bot] (Contributor) commented on behalf of github on Mar 19, 2021

Bumps tensorflow-gpu from 1.15.0 to 2.4.0.
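The change itself is a one-line version bump in a pinned requirements file. A hypothetical rendering of that diff is shown below; the actual file(s) touched (the PR's second commit updates host-requirements.txt) and the exact pin syntax may differ.

```diff
-tensorflow-gpu==1.15.0
+tensorflow-gpu==2.4.0
```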

Release notes

Sourced from tensorflow-gpu's releases.

TensorFlow 2.4.0

Release 2.4.0

Major Features and Improvements

  • tf.distribute introduces experimental support for asynchronous training of models via the tf.distribute.experimental.ParameterServerStrategy API. Please see the tutorial to learn more.

  • MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.

  • Introduces experimental support for a new module named tf.experimental.numpy, which is a NumPy-compatible API for writing TF programs. See the detailed guide to learn more. Additional details below.

  • Adds support for TensorFloat-32 on NVIDIA Ampere-based GPUs. TensorFloat-32, or TF32 for short, is a math mode for Ampere GPUs and is enabled by default.

  • A major refactoring of the internals of the Keras Functional API has been completed; it should improve the reliability, stability, and performance of constructing Functional models.

  • The Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating-point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs (a short usage sketch follows this list). Please see below for additional details.

  • TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.

  • TFLite Profiler for Android is available. See the detailed guide to learn more.

  • TensorFlow pip packages are now built with CUDA 11 and cuDNN 8.0.2.
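As an editor's aside (not part of the quoted release notes): a minimal sketch of two of the additions listed above, the now-stable Keras mixed precision API and the NumPy-compatible tf.experimental.numpy module. The toy model and random data are made up for illustration; mixed_float16 only pays off on GPUs/TPUs, but the code also runs (with a performance warning) on CPU.

```python
import numpy as np
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Stable mixed precision API (previously under tf.keras.mixed_precision.experimental).
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Toy model: keep the final layer in float32 so the softmax output stays numerically stable.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(
    np.random.rand(256, 16).astype("float32"),
    np.random.randint(0, 10, size=(256,)),
    batch_size=32,
    epochs=1,
    verbose=0,
)

# tf.experimental.numpy: NumPy-style calls executed as TensorFlow ops.
a = tnp.ones((3, 4))
b = tnp.ones((4, 2)) * 0.5   # tnp ndarrays support the usual arithmetic operators
print(tnp.dot(a, b))         # behaves like np.dot
print(tnp.sum(a, axis=1))    # behaves like np.sum
```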

Breaking Changes

  • TF Core:

    • Certain float32 ops run in lower precision on Ampere-based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10 bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can be disabled by running tf.config.experimental.enable_tensor_float_32_execution(False).
    • The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of tensorflow::tstring/TF_TStrings.
    • C-API functions TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
    • tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of TensorFlow public API.
    • tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
    • XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
  • tf.keras:

    • The steps_per_execution argument in model.compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling model.fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead (a short sketch follows these quoted notes).
    • A major refactoring of the internals of the Keras Functional API may affect code that is relying on certain internal details:
      • Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
      • Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using tensor.ref(), etc.) may break.
      • Code that uses full path for get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
      • Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into; these may have changed.
      • Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
      • Code that directly asserts on a Keras symbolic value in cases where ops like tf.rank used to return a static or symbolic value depending on if the input had a fully static shape or not. Now these ops always return symbolic values.
      • Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
      • Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use GradientTape on the actual Tensors passed to the already-constructed model instead.
      • Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
      • Code that tries manually walking a tf.keras.Model layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.

... (truncated)
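A minimal migration sketch (an editor's illustration, not part of this PR or the quoted notes) for three of the breaking changes above: opting out of TensorFloat-32 when bit-exact float32 results matter, checking Keras symbolic tensors with tf.is_tensor as the notes recommend, and the rename of experimental_steps_per_execution to steps_per_execution in model.compile().

```python
import tensorflow as tf

# 1. TensorFloat-32 is on by default on Ampere GPUs in TF 2.4; opt out if you need
#    full float32 precision for matmuls and convolutions.
tf.config.experimental.enable_tensor_float_32_execution(False)

# 2. After the Functional API refactor, use tf.is_tensor rather than
#    isinstance(x, tf.Tensor) when checking Keras symbolic inputs/outputs.
inputs = tf.keras.Input(shape=(16,))
print(tf.is_tensor(inputs))

# 3. The compile() argument is no longer experimental: rename
#    experimental_steps_per_execution to steps_per_execution.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
model.compile(
    optimizer="sgd",
    loss="mse",
    steps_per_execution=16,  # number of batches run per tf.function call in fit()
)
```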

Changelog

Sourced from tensorflow-gpu's changelog.

Release 2.4.0

Major Features and Improvements

Breaking Changes

  • TF Core:
    • Certain float32 ops run in lower precision on Ampere based GPUs, including

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.

dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on Mar 19, 2021
davidslater merged commit 24f40f5 into master on Mar 21, 2021
davidslater deleted the dependabot/pip/tensorflow-gpu-2.4.0 branch on March 21, 2021 at 02:57
davidslater added a commit that referenced this pull request Mar 25, 2021
* Bump pillow from 7.1.2 to 8.1.1 (#1024)

Bumps [pillow](https://github.com/python-pillow/Pillow) from 7.1.2 to 8.1.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/master/CHANGES.rst)
- [Commits](python-pillow/Pillow@7.1.2...8.1.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump tensorflow-gpu from 1.15.0 to 2.4.0 (#1025)

* Bump tensorflow-gpu from 1.15.0 to 2.4.0

Bumps [tensorflow-gpu](https://github.com/tensorflow/tensorflow) from 1.15.0 to 2.4.0.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v1.15.0...v2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>

* Update host-requirements.txt

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: davidslater <david.slater@twosixlabs.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: davidslater <david.slater@twosixlabs.com>
lcadalzo added a commit that referenced this pull request Mar 25, 2021
* Run CI tests on PRs/commits to dev (#1000)

* add indexing to datasets (#1003)

* kwargs

* update slicing

* update data loader

* add tests

* neal change

* Docker updates (#1008)

* added object detection metrics

* WIP: update docker dependencies (#1006)

* updating docker dependencies

* removing poisoning images

* update deepspeech image dependencies

* update host-requirements.txt

* removing poisoning images from release.yml

* add coloredlogs

* update ART pip install command

* remove accidental new line character

* Docker build simplification (#1017)

* remove -dev

* update

* docker build

* distribute docker containers

* update dockerfiles and github workflows

* update docs

* add pytorch-deepspeech to images

* add dev tag

* update release

* update error handling

* remove comment

* remove unneeded command

* tool to update versions

* removing calls to ART's set_learning_phase()

* updating image version

* formatting

* apricot skip (#1020)

Co-authored-by: davidslater <david.slater@twosixlabs.com>

* Fix missing validation folder in whl (#972)

* Fix missing validation folder in whl

* Automatically find test folder relative to armory installation

* Ignore pytest cache warning

* Fix import, add warning filter

* Disable test caching

* skip misclassified examples (#1005)

* added object detection metrics

* adding proof of concept

* black formatting

* move cli/config check to base scenario before _evaluate()

* refactor image_classification skip_misclassified

* update __main__ with skip-misclassified

* record_metric_per_sample doesn't need to be true

* add skip_misclassified to so2sat scenario

* adding skip_misclassified to scenarios where it shouldn't be used

* add skip_misclassified to video

* minor refactor of audio_classification plus adding in skip_misclassified

* docs for skip-misclassified

* Filter by class (#1019)

* added object detection metrics

* adding ability to filter by class; also modified error messages for filter_by_index()

* flake8

* fix new test

* updated docs

* update warning logic for filter_by_index

* add warning if filtering by class and using train split

* fix filter by class dset test

* Micronnet model using pytorch sequential api (#1023)

* Sequential API Pytorch version of MicronNet

* Fix pytest

* unify sysconfig and command line directives (#1027)

* merge sysconfig and command arguments

* unit test for config merge

arguments.py created because __main__  is not pytest importable
(or at least not easily)

additional args like filepath get added to sysconfig, I don't know
if that's a bad thing

* code correct but the test was wrong

* flake8 compliance

* args merge documented

* collapse loops to comprehensions

* remove obsoleted truth table comment

* add cross-reference to command_line.md

* DAPRICOT integration (#1021)

* initial draft of DAPRICOT dataset

* ran black and flake8

* updating checksums for now

* add dapricot dataset fn

* refactor preprocessing + setting cache=False temporarily

* add WIP dapricot scenario and config

* update scenario

* add label preprocessing to unify label format with other OD datasets

* added attack skeleton and minor scenario updates

* testing insertion of random patch

* changes necessary for upgrade to ART 1.6

* update config for attack rename

* typo

* use robust dpatch to generate attack

* update dapricot config

* black formatting

* format json

* flake8

* add masked pgd attack for dapricot

* formatting

* adding dapricot utils script

* fix channel order

* return batch in (3, H, W, C) shape

* fixing case when x_key is a tuple

* update model to be compatible with ART 1.6

* don't slice channel dim in reverse

* updated dapricot_dev with access to all three cameras

* minor update

* ran black, flake8

* update dapricot version and add new cached checksum file

* use cached by default

* add metric fn for dapricot

* parse patch shape

* fixes bug that arises because there are no non-targeted adversarial metrics

* make scenario code more specific to dapricot threat model

* move config checks to before model loading

* update configs with label targeters

* add physical threat model

* removing some patch insertion functionality

* remove unused variable

* adding cv2 and necessary dependencies

* add skip-misclassified to dapricot scenario

* removing outdated dockerfile that isn't used anymore

* updating dockerfiles with opencv

* add DEBIAN_FRONTEND=noninteractive to dockerfiles

* docs and minor modification of attack config params

* no longer need to pin to ART dev branch

* adding dapricot test

* tweak to label preprocessing

* typo in sysconfig

* enforce batch_size 1 earlier; fix dapricot config

* fix typo

* json formatting

* reset patch_location and patch_shape per example

* add opencv to host_requirements.txt

* keep everything (3, H, W, 3)

* add comments to host_requirements.txt re cv2

Co-authored-by: lucas.cadalzo <lucas.cadalzo@twosixlabs.com>

* Audio echo (#1030)

* added logic

* audio channel

* audio channel

* audio fixes

* Dev merge (#1032)

* Bump pillow from 7.1.2 to 8.1.1 (#1024)

Bumps [pillow](https://github.com/python-pillow/Pillow) from 7.1.2 to 8.1.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/master/CHANGES.rst)
- [Commits](python-pillow/Pillow@7.1.2...8.1.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump tensorflow-gpu from 1.15.0 to 2.4.0 (#1025)

* Bump tensorflow-gpu from 1.15.0 to 2.4.0

Bumps [tensorflow-gpu](https://github.com/tensorflow/tensorflow) from 1.15.0 to 2.4.0.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v1.15.0...v2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>

* Update host-requirements.txt

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: davidslater <david.slater@twosixlabs.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: davidslater <david.slater@twosixlabs.com>

Co-authored-by: ng390 <neal.gupta@twosixlabs.com>
Co-authored-by: lcadalzo <39925313+lcadalzo@users.noreply.github.com>
Co-authored-by: kevinmerchant <67436031+kevinmerchant@users.noreply.github.com>
Co-authored-by: matt wartell <matt.wartell@twosixlabs.com>
Co-authored-by: yusong-tan <59029053+yusong-tan@users.noreply.github.com>
Co-authored-by: lucas.cadalzo <lucas.cadalzo@twosixlabs.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Labels: dependencies (Pull requests that update a dependency file)
Projects: None yet
1 participant