Tag for 3.9.1 release
alfpark committed Dec 13, 2019
1 parent 85985d6 commit d6da749
Showing 3 changed files with 31 additions and 3 deletions.
22 changes: 21 additions & 1 deletion CHANGELOG.md
@@ -2,8 +2,27 @@

## [Unreleased]

## [3.9.1] - 2019-12-13
### Added
- Support `--no-wait` on pool creation to allow the command to skip
waiting for the pool to become idle (see the usage sketch after this list)
- Add the ability to ignore GPU warnings; see the pool configuration documentation
- Ubuntu 18.04 SR-IOV IB/RDMA Packer script
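
As a quick illustration, the new `--no-wait` behavior can be exercised with
the standard pool creation command; this is a minimal sketch assuming the
usual `shipyard pool add` entry point:

```shell
# Create the pool but return immediately instead of waiting for the pool
# to reach an idle/steady state
shipyard pool add --no-wait
```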

### Changed
- **Breaking Change:** improved per-job autoscratch setup. As part of this
change, the `auto_scratch` property in the jobs configuration has changed
(see the configuration sketch after this list):
    - Provide the ability to set up via dependency or blocking behavior
    - Allow specifying the number of VMs to span
    - Allow specifying the per-job autoscratch task id
- Allow multiple multi-instance tasks per job in non-`native` mode
- Update GlusterFS version to 7
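
The reworked `auto_scratch` job property could look roughly like the sketch
below; the key names (`setup`, `num_instances`, `task_id`) are illustrative
assumptions only, so consult the jobs configuration documentation for the
actual schema:

```yaml
# Hypothetical sketch: key names are assumptions, not the documented schema
job_specifications:
- id: myjob
  auto_scratch:
    setup: block          # assumed: dependency- or block-based provisioning
    num_instances: 4      # assumed: number of VMs the scratch space spans
    task_id: autoscratch  # assumed: explicit per-job autoscratch task id
```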

### Fixed
- Fix merge task regression with enhanced autogenerated task id support
- Fix job schedule submission regression
([#329](https://github.com/Azure/batch-shipyard/issues/329))
- Fix per-job autoscratch provisioning due to upstream dependency changes

## [3.9.0] - 2019-11-15 (SC19 Edition)
### Added
@@ -1707,7 +1726,8 @@ transfer is disabled
#### Added
- Initial release

-[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.9.0...HEAD
+[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.9.1...HEAD
+[3.9.1]: https://github.com/Azure/batch-shipyard/compare/3.9.0...3.9.1
[3.9.0]: https://github.com/Azure/batch-shipyard/compare/3.8.2...3.9.0
[3.8.2]: https://github.com/Azure/batch-shipyard/compare/3.8.1...3.8.2
[3.8.1]: https://github.com/Azure/batch-shipyard/compare/3.8.0...3.8.1
2 changes: 1 addition & 1 deletion convoy/version.py
@@ -22,4 +22,4 @@
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

-__version__ = '3.9.0'
+__version__ = '3.9.1'
10 changes: 9 additions & 1 deletion docs/63-batch-shipyard-custom-images.md
@@ -28,7 +28,7 @@ It is **strongly recommended** to use Shared Image Gallery resources instead
of directly using an Azure Managed Image for increased reliability, robustness
and performance of scale out (i.e., pool allocation with target node counts
and resize up) operations with Azure Batch pools. These improvements hold even
-for Shared Image Gallery resource with a replica count of 1.
+for a Shared Image Gallery resource with a replica count of 1.

This guide will focus on creating Shared Image Gallery resources for use with
Azure Batch and Batch Shipyard.
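
As a reference point, a Shared Image Gallery and an image definition within
it can be created with the Azure CLI; the resource group, gallery, and image
definition names below are placeholders:

```shell
# Create a Shared Image Gallery (all names are placeholders)
az sig create --resource-group myrg --gallery-name mygallery

# Create an image definition within the gallery for a Linux custom image
az sig image-definition create \
    --resource-group myrg \
    --gallery-name mygallery \
    --gallery-image-definition myimagedef \
    --publisher mypublisher --offer myoffer --sku mysku \
    --os-type Linux
```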
@@ -263,12 +263,20 @@ and the required user-land software for Infiniband installed. It is best to
base a custom image off of the existing Azure platform images that support
Infiniband/RDMA.

#### MPI Libraries
If you are utilizing MPI, the associated runtime(s) must be installed such
that they are invocable by the calling programs.
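
A simple sanity check on the image is to confirm that the MPI launcher
resolves on the `PATH` and reports its version, for example:

```shell
# Verify the MPI runtime is installed and invocable
command -v mpirun && mpirun --version
```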

#### Storage Cluster Auto-Linking and Mounting
If mounting a storage cluster, the required NFSv4 or GlusterFS client tooling
must be installed and invocable such that the auto-link mount functionality
is operable. You do not need to install both clients unless you are mounting
both types of storage clusters.
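
For example, on Ubuntu-based images the client tooling can be pre-installed
as follows (package names vary by distribution):

```shell
# NFSv4 client tooling
sudo apt-get install -y nfs-common
# GlusterFS client tooling (only needed for GlusterFS storage clusters)
sudo apt-get install -y glusterfs-client
```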

#### Per-Job Autoscratch
If utilizing the per-job autoscratch feature, then BeeGFS BeeOND must be
installed so that a shared file system can be created.
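
As a sketch, on Debian/Ubuntu-based images this typically means installing
the `beeond` package after adding the BeeGFS package repository for your
distribution, then confirming the tool is invocable:

```shell
# Install BeeOND from the BeeGFS package repository (repository setup not shown)
sudo apt-get install -y beeond
# Confirm the launcher is present (it may reside under /opt/beegfs/sbin
# depending on packaging)
command -v beeond || ls /opt/beegfs/sbin/beeond
```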

#### GlusterFS On Compute
If a GlusterFS on compute shared data volume is required, then GlusterFS
server and client tooling must be installed and invocable so the shared
