Releases: nanoporetech/bonito
v0.8.1
- e33a860 Attention Is All You Need!
- 454324a v5.0.0 models and example training sets for DNA & RNA (see the usage sketch after this list).
  dna_r10.4.1_e8.2_400bps_sup@v5.0.0
  dna_r10.4.1_e8.2_400bps_hac@v5.0.0
  dna_r10.4.1_e8.2_400bps_fast@v5.0.0
  rna004_130bps_sup@v5.0.0
  rna004_130bps_hac@v5.0.0
  rna004_130bps_fast@v5.0.0
  example_data_dna_r10.4.1_v0
  example_data_rna004_v0
- ed36968 new model configs.
- 37a0557 default alignment preset is now lr:hq.
- d4f6dd2 unpin bonito requirements.
- 40a9753 fast5 deprecation warning.
- 0ba190f fixed progress count when setting --max-reads.
- 6c8ecb5 batchnorm fusing for inference.
- 302b1ce bonito view now accepts a model directory or a config.
- a170b7c default scaling fixes.
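The v5.0.0 models above are referred to by name on the command line, and bonito view can now be pointed at a model directory or a config. A minimal usage sketch, assuming the example training set has been unpacked locally; all paths and output names below are placeholders rather than part of this release:
$ bonito basecaller dna_r10.4.1_e8.2_400bps_sup@v5.0.0 /data/reads > basecalls.bam
$ bonito view /data/models/dna_r10.4.1_e8.2_400bps_sup@v5.0.0
$ bonito train --directory /data/example_data_dna_r10.4.1_v0 /data/training/model-dir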
v0.7.3
v0.7.2
v0.7.1
Highlights
- 9113e24 v4.2.0 5kHz simplex models.
  dna_r10.4.1_e8.2_400bps_fast@v4.2.0
  dna_r10.4.1_e8.2_400bps_hac@v4.2.0
  dna_r10.4.1_e8.2_400bps_sup@v4.2.0
- 8c96eb8 make sample_id optional for fast5 input.
- 3b4bcad ensure decoder runs on same device as nn model.
- 8fe1f61 fix training data downloading.
- 26d52d9 set default --valid-chunks to None.
- ebc32a0 fix models as list.
Thanks @chAwater for his collection of bug fixes in this release.
Installation
$ pip install ont-bonito
Note: For anything other than basecaller training or method development please use dorado.
v0.7.0
Highlights
- 66ee29a v4.1.0 simplex models.
  dna_r10.4.1_e8.2_260bps_fast@v4.1.0
  dna_r10.4.1_e8.2_260bps_hac@v4.1.0
  dna_r10.4.1_e8.2_260bps_sup@v4.1.0
  dna_r10.4.1_e8.2_400bps_fast@v4.1.0
  dna_r10.4.1_e8.2_400bps_hac@v4.1.0
  dna_r10.4.1_e8.2_400bps_sup@v4.1.0
- 4cf3c6f torch 2.0 + updated requirements.
- 3bc338a fix use of TLEN.
- 21df7d5 v4.0.0 simplex models.
  dna_r10.4.1_e8.2_260bps_fast@v4.0.0
  dna_r10.4.1_e8.2_260bps_hac@v4.0.0
  dna_r10.4.1_e8.2_260bps_sup@v4.0.0
  dna_r10.4.1_e8.2_400bps_fast@v4.0.0
  dna_r10.4.1_e8.2_400bps_hac@v4.0.0
  dna_r10.4.1_e8.2_400bps_sup@v4.0.0
Installation
Torch 2.0 (from pypi.org) is now built using CUDA 11.7, so the default installation of ont-bonito can be used for Turing/Ampere GPUs.
$ pip install ont-bonito
Note: For anything other than basecaller training or method development please use dorado.
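As an optional sanity check (not part of the official instructions), you can confirm the default wheel pulled in a CUDA-enabled torch build before basecalling:
$ python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"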
v0.6.2
v0.6.1
v0.6.0
Highlights
- f2a3a8e improved quantile based signal scaling algorithm.
- 552a5ce significant improvement in short read calling vs previous bonito versions.
- 5ceb6e1 qscore filtering.
- cfb7e06 new R10.4.1 E8.2 models (v3.5.2) for both 260bps and 400bps conditions.
  dna_r10.4.1_e8.2_260bps_fast@v3.5.2
  dna_r10.4.1_e8.2_260bps_hac@v3.5.2
  dna_r10.4.1_e8.2_260bps_sup@v3.5.2
  dna_r10.4.1_e8.2_400bps_fast@v3.5.2
  dna_r10.4.1_e8.2_400bps_hac@v3.5.2
  dna_r10.4.1_e8.2_400bps_sup@v3.5.2
Bugfixes
- fa56de1 skip over any fast5 files that cause runtime errors.
- f0827d9 use stderr for all model download output to avoid issues with sequence output formats.
- 3c8294b upgraded koi with py3.7 support.
Misc
- Python 3.10 support added.
- Read tags added for signal scaling midpoint, dispersion and version.
- 9f7614d support for exporting models to dorado.
- 90b6d19 add estimated total time to basecaller progress.
- 8ba78ed export for guppy binary weights.
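The dorado and guppy export items above are exposed through the bonito export sub-command. The exact arguments vary between versions, so treat the model path below as a placeholder and rely on the built-in help:
$ bonito export --help
$ bonito export /data/training/model-dir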
Installation
$ pip install ont-bonito
By default pip will install torch which is built against CUDA 10.2. For CUDA 11.3 builds run:
$ pip install --extra-index-url https://download.pytorch.org/whl/cu113 ont-bonito
Note: packaging has been reworked and the ont-bonito-cuda111 and ont-bonito-cuda113 packages are now retired. The CUDA version of torch is now handled exclusively with the use of pip install --extra-index-url.
v0.5.3
Highlights
- c0f83f3 POD5 support - https://github.com/nanoporetech/pod5-file-format
- 27f8468 Kit 14 models dna_r10.4.1_e8.2_{fast, hac, sup}@v3.5.1 added.
- e6282a2 Upgrade to remora 1.1 (kit 14 mod bases).
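With POD5 support in place, a directory of .pod5 files can be passed to the basecaller in the same way as fast5 input. A minimal sketch, where <model> stands for one of the Kit 14 models above and the paths are placeholders:
$ bonito basecaller <model> /data/pod5_reads > basecalls.bam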
Bugfixes
- 4585b74 fix for handling stitching of short reads (read < chunksize).
- 9a4f98a fix for overly confident qscores in repeat regions.
- 3187198 scaling protection for short reads.
Misc
- d57a658 training validation times improved.
Installation
The default version of PyTorch in PyPI supports Volta and below (SM70) and can be installed like so -
$ pip install ont-bonito
For newer GPUs (Turing, Ampere) please use -
$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113
v0.5.1
Highlights
- There is no longer a requirement for a CUDA toolkit on the target system, which significantly improves the ease of installation.
- BAM spec 0.0.2 (+move table, number of samples, trimming information).
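The additional BAM fields (move table, number of samples, trimming information) are carried as per-read tags in the output records, so one quick way to inspect them, assuming samtools is installed and basecalls.bam is your output file, is:
$ samtools view basecalls.bam | head -n 1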
Features
Bugfixes
- c8417b7 handle datetimes with subsecond resolution.
- 6f23467 fix the mappy preset.
- 737d9a2 better management of mappy's memory usage.
- 2bbd711 remora 0.1.2 - fixes bonito/remora hanging #216.
- 6e91a9d sensible minimum scaling factor - fixes #209.
Misc
- Upgrade to the latest Mappy.
- Python 3.6 support dropped (EOL).
Installation
The default version of PyTorch in PyPI supports Volta and below (SM70) and can be installed like so -
$ pip install ont-bonito
For newer GPUs (Turing, Ampere) please use -
$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113