
Releases: nanoporetech/bonito

v0.8.1

21 May 17:29
  • e33a860 Attention Is All You Need!
  • 454324a v5.0.0 models and example training sets for DNA & RNA.
    • dna_r10.4.1_e8.2_400bps_sup@v5.0.0
    • dna_r10.4.1_e8.2_400bps_hac@v5.0.0
    • dna_r10.4.1_e8.2_400bps_fast@v5.0.0
    • rna004_130bps_sup@v5.0.0
    • rna004_130bps_hac@v5.0.0
    • rna004_130bps_fast@v5.0.0
    • example_data_dna_r10.4.1_v0
    • example_data_rna004_v0
  • ed36968 new model configs.
  • 37a0557 default alignment preset is now lr:hq.
  • d4f6dd2 unpin bonito requirements.
  • 40a9753 fast5 deprecation warning.
  • 0ba190f fixed progress count when setting --max-reads.
  • 6c8ecb5 batchnorm fusing for inference.
  • 302b1ce bonito view now accepts a model directory or a config.
  • a170b7c default scaling fixes.

v0.7.3

12 Dec 17:21

v0.7.2

31 Jul 15:16

v0.7.1

01 Jun 13:23

Highlights

  • 9113e24 v4.2.0 5kHz simplex models.
    • dna_r10.4.1_e8.2_400bps_fast@v4.2.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.2.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.2.0
  • 8c96eb8 make sample_id optional for fast5 input.
  • 3b4bcad ensure decoder runs on same device as nn model.
  • 8fe1f61 fix training data downloading.
  • 26d52d9 set default --valid-chunks to None.
  • ebc32a0 fix models as list.

Thanks to @chAwater for the collection of bug fixes in this release.

Installation

$ pip install ont-bonito

Note: For anything other than basecaller training or method development, please use dorado.

v0.7.0

03 Apr 13:08

Highlights

  • 66ee29a v4.1.0 simplex models.
    • dna_r10.4.1_e8.2_260bps_fast@v4.1.0
    • dna_r10.4.1_e8.2_260bps_hac@v4.1.0
    • dna_r10.4.1_e8.2_260bps_sup@v4.1.0
    • dna_r10.4.1_e8.2_400bps_fast@v4.1.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.1.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.1.0
  • 4cf3c6f torch 2.0 + updated requirements.
  • 3bc338a fix use of TLEN.
  • 21df7d5 v4.0.0 simplex models.
    • dna_r10.4.1_e8.2_260bps_fast@v4.0.0
    • dna_r10.4.1_e8.2_260bps_hac@v4.0.0
    • dna_r10.4.1_e8.2_260bps_sup@v4.0.0
    • dna_r10.4.1_e8.2_400bps_fast@v4.0.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.0.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.0.0

Installation

Torch 2.0 (from pypi.org) is now built using CUDA 11.7, so the default installation of ont-bonito can be used with Turing/Ampere GPUs.

$ pip install ont-bonito

Note: For anything other than basecaller training or method development, please use dorado.
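To confirm which torch build the default installation pulled in, a quick check can help (a sketch; it assumes torch may or may not already be installed):

```python
# Quick sanity check of the installed torch build (a sketch; assumes
# `pip install ont-bonito` has already pulled in torch as a dependency).
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed")
else:
    import torch
    # torch.version.cuda is None for CPU-only builds.
    print("torch", torch.__version__, "built with CUDA:", torch.version.cuda)
```

A CUDA version of 11.7 here matches the build described above; a value of None indicates a CPU-only wheel.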

v0.6.2

13 Nov 23:28

  • da7fe39 upgrade to pod5 0.0.41.
  • c45905c add milliseconds to start_time + convert to UTC.
  • 199a3f0 adds duration as du tag to BAM output.
  • 717f414 fix bug in fast5 read id subset pre-processing.

v0.6.1

12 Sep 15:45

Bugfixes

v0.6.0

05 Sep 13:47

Highlights

  • f2a3a8e improved quantile based signal scaling algorithm.
  • 552a5ce significant improvement in short read calling vs previous bonito versions.
  • 5ceb6e1 qscore filtering.
  • cfb7e06 new R10.4.1 E8.2 models (v3.5.2) for both 260bps and 400bps conditions.
    • dna_r10.4.1_e8.2_260bps_fast@v3.5.2
    • dna_r10.4.1_e8.2_260bps_hac@v3.5.2
    • dna_r10.4.1_e8.2_260bps_sup@v3.5.2
    • dna_r10.4.1_e8.2_400bps_fast@v3.5.2
    • dna_r10.4.1_e8.2_400bps_hac@v3.5.2
    • dna_r10.4.1_e8.2_400bps_sup@v3.5.2

Bugfixes

  • fa56de1 skip over any fast5 files that cause runtime errors.
  • f0827d9 use stderr for all model download output to avoid issues with sequence output formats.
  • 3c8294b upgraded koi with py3.7 support.

Misc

  • Python 3.10 support added.
  • Read tags added for signal scaling midpoint, dispersion and version.
  • 9f7614d support for exporting models to dorado.
  • 90b6d19 add estimated total time to basecaller progress.
  • 8ba78ed export for guppy binary weights.

Installation

$ pip install ont-bonito

By default pip will install torch built against CUDA 10.2. For CUDA 11.3 builds run:

$ pip install --extra-index-url https://download.pytorch.org/whl/cu113 ont-bonito

Note: packaging has been reworked and the ont-bonito-cuda111 and ont-bonito-cuda113 packages are now retired. The CUDA version of torch is now handled exclusively via pip install --extra-index-url.

v0.5.3

19 May 16:38

Bugfixes

  • 4585b74 fix for handling stitching of short reads (read < chunksize).
  • 9a4f98a fix for overly confident qscores in repeat regions.
  • 3187198 scaling protection for short reads.

Misc

  • d57a658 training validation times improved.

Installation

The default version of PyTorch on PyPI supports Volta and below (SM70) and can be installed like so:

$ pip install ont-bonito

For newer GPUs (Turing, Ampere) please use:

$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113

v0.5.1

11 Feb 16:04

Highlights

  • There is no longer a requirement for a CUDA toolkit on the target system, which significantly improves the ease of installation.
  • BAM spec 0.0.2 (+move table, numbers of samples, trimming information).

Features

  • 241e622 record the move table into the SAM/BAM.
  • a6a3ed2 ont-koi replaces seqdist + cupy.
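The move table recorded per read can be read back from the BAM output; a hedged sketch using pysam follows. The file name "calls.bam" is a placeholder, and the tag name "mv" with its layout (stride first, then per-sample move flags) follows ONT convention rather than anything stated in these notes.

```python
# Sketch: read the move table ("mv" tag) that bonito now records in its
# SAM/BAM output. Assumes pysam is installed and a bonito-produced
# "calls.bam" exists; both are placeholders. The tag layout assumed here
# (stride followed by per-sample move flags) is the ONT convention.
import importlib.util
import os

if importlib.util.find_spec("pysam") is None or not os.path.exists("calls.bam"):
    print("pysam or calls.bam unavailable; nothing to do")
else:
    import pysam
    with pysam.AlignmentFile("calls.bam", check_sq=False) as bam:
        for read in bam:
            if read.has_tag("mv"):
                stride, *moves = read.get_tag("mv")
                print(read.query_name, "stride:", stride, "moves:", len(moves))
```

The move table maps basecalled positions back to signal samples, which is what downstream tools such as modified-base callers consume.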

Bugfixes

  • c8417b7 handle datetimes with subsecond resolution.
  • 6f23467 fix the mappy preset.
  • 737d9a2 better management of mappy's memory usage.
  • 2bbd711 remora 0.1.2 - fixes bonito/remora hanging #216.
  • 6e91a9d sensible minimum scaling factor - fixes #209.

Misc

  • Upgrade to the latest Mappy.
  • Python 3.6 support dropped (EOL).

Installation

The default version of PyTorch on PyPI supports Volta and below (SM70) and can be installed like so:

$ pip install ont-bonito

For newer GPUs (Turing, Ampere) please use:

$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113