Releases: albumentations-team/albumentations
Albumentations 1.4.11 Release Notes
- Support our work
- Transforms
- Core functionality
- Deprecations
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Transforms
Added OverlayElements transform
Allows pasting a set of images and their corresponding masks onto the input image.
It is not a full CopyAndPaste implementation, as "masks", "bounding boxes" and "keypoints" targets are not supported yet, but it is a step in that direction.
Affine
Added balanced sampling for scale_limit
From FAQ:
The default scaling logic in RandomScale, ShiftScaleRotate, and Affine transformations is biased towards upscaling.
For example, if scale_limit = (0.5, 2), a user might expect that the image will be scaled down in half of the cases and scaled up in the other half. However, in reality, the image will be scaled up in 75% of the cases and scaled down in only 25% of the cases. This is because the default behavior samples uniformly from the interval [0.5, 2], and the interval [0.5, 1] is three times smaller than [1, 2].
To achieve balanced scaling, you can use Affine with balanced_scale=True, which ensures that the probability of scaling up and scaling down is equal.
balanced_scale_transform = A.Compose([A.Affine(scale=(0.5, 2), balanced_scale=True)])
by @ternaus
RandomSizedBBoxSafeCrop
Added support for keypoints
by @ternaus
BBoxSafeRandomCrop
Added support for keypoints
by @ternaus
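A short sketch of a pipeline that routes keypoints through one of these bbox-safe crops; the image size, bbox format, and keypoint format here are arbitrary choices:
import albumentations as A
import numpy as np

transform = A.Compose(
    [A.RandomSizedBBoxSafeCrop(height=256, width=256, p=1)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
    keypoint_params=A.KeypointParams(format="xy"),
)

result = transform(
    image=np.zeros((512, 512, 3), dtype=np.uint8),
    bboxes=[(10, 10, 100, 100)],
    labels=[1],
    keypoints=[(50, 50)],
)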
RandomToneCurve
- Now can sample noise per channel
- Works with any number of channels
- Now works not just with uint8, but with float32 images as well
by @zakajd
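A hedged example of the per-channel option; the per_channel argument name is an assumption based on the note above:
import albumentations as A

# scale controls how strongly the tone curve is distorted;
# per_channel (assumed name) samples a separate curve for each channel.
transform = A.Compose([A.RandomToneCurve(scale=0.3, per_channel=True, p=1)])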
ISONoise
- BugFix
- Now works not just with uint8, but with float32 images as well
by @ternaus
Core
Added strict parameter to Compose
If strict=True, only targets that are expected can be passed.
If strict=False, the user can pass data with extra keys. Such data will not be affected by transforms.
The request came from users who use pipelines of the form:
transform = A.Compose([....])
data = transform(**data)
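A small sketch of the difference, assuming a pipeline that only declares the default image target:
import albumentations as A
import numpy as np

image = np.zeros((100, 100, 3), dtype=np.uint8)

# strict=True: an unexpected key such as "metadata" raises an error.
strict_transform = A.Compose([A.HorizontalFlip(p=0.5)], strict=True)

# strict=False: extra keys are passed through untouched by the transforms.
relaxed_transform = A.Compose([A.HorizontalFlip(p=0.5)], strict=False)
data = relaxed_transform(image=image, metadata={"id": 42})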
by @ayasyrev
Refactoring
The crop module was heavily refactored. All tests and checks pass, but we will see.
Deprecations
GridDropout
Old way:
GridDropout(
holes_number_x=XXX,
holes_number_y=YYY,
unit_size_min=ZZZ,
unit_size_max=PPP
)
New way:
GridDropout(
holes_number_xy = (XXX, YYY),
unit_size_range = (ZZZ, PPP)
)
by @ternaus
RandomSunFlare
Old way:
RandomSunFlare(
num_flare_circles_lower = XXX,
num_flare_circles_upper = YYY
)
New way:
RandomSunFlare(num_flare_circles_range = (XXX, YYY))
Bugfixes
- Bugfix in ISONoise, as it returned zeros. by @ternaus
- BugFix in Affine: during rotation, the image, mask, and keypoints had one center point for rotation and the bounding boxes another, so we need to create two separate affine matrices. by @ternaus
- Small fix in an error message by @philipp-fischer
- Bugfix that affected many transforms, where users specified the probability as a positional number and not as p=number. Say for VerticalFlip(0.5) you could expect a 50% chance, but 0.5 was attributed not to p but to always_apply, which meant that the transform was always applied. by @ayasyrev
Hotfix release with fixes for GaussNoise
Hotfix release that addresses issues introduced in 1.4.9
There were two issues in GaussNoise that this release addresses:
- Default value of 0.5 for noise_scale_factor, which is different from the behavior before version 1.4.9. Now the default value is 1, which means random noise is created for every point independently.
- Noise was truncated before adding to the image, so that gauss >= 0. Fixed.
Albumentations 1.4.9 Release Notes
- Support our work
- New transforms
- Integrations
- Speedups
- Deprecations
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Transforms
PlanckianJitter
New transform, based on the Planckian Jitter paper.
Statements from the paper on why PlanckianJitter is superior to ColorJitter:
- Realistic Color Variations: PlanckianJitter applies physically realistic illuminant variations based on Planck’s Law for black-body radiation. This leads to more natural and realistic variations in chromaticity compared to the arbitrary changes in hue, saturation, brightness, and contrast applied by ColorJitter.
- Improved Representation for Color-Sensitive Tasks: The transformations in PlanckianJitter maintain the ability to discriminate image content based on color information, making it particularly beneficial for tasks where color is a crucial feature, such as classifying natural objects like birds or flowers. ColorJitter, on the other hand, can significantly alter colors, potentially degrading the quality of learned color features.
- Robustness to Illumination Changes: PlanckianJitter produces models that are robust to illumination changes commonly observed in real-world images. This robustness is advantageous for applications where lighting conditions can vary widely.
- Enhanced Color Sensitivity: Models trained with PlanckianJitter show a higher number of color-sensitive neurons, indicating that these models retain more color information compared to those trained with ColorJitter, which tends to induce color invariance.
by @zakajd
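A minimal usage sketch; the mode value shown is an assumption based on the black-body variant described in the paper:
import albumentations as A

# "blackbody" (assumed value) samples illuminants from Planck's law.
transform = A.Compose([A.PlanckianJitter(mode="blackbody", p=1)])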
GaussNoise
Added an option to approximate GaussNoise.
Generation of random noise for large images is slow, so we added a scaling factor for noise generation. The value should be in the range (0, 1]. When set to 1, noise is sampled for each pixel independently. If less than 1, noise is sampled at a smaller size and resized to fit the shape of the image. Smaller values make the transform much faster. Default: 0.5
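For example, trading exact per-pixel noise for speed on large images might look like this (the var_limit values are just illustrative):
import albumentations as A

# noise_scale_factor < 1 samples noise at a reduced resolution and resizes it
# to the image shape; 1 keeps fully independent per-pixel noise.
transform = A.Compose([A.GaussNoise(var_limit=(10.0, 50.0), noise_scale_factor=0.5, p=1)])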
Integrations
Added integration with the Hugging Face Hub. Now you can load and save an augmentation pipeline to Hugging Face and reuse it in the future or share it with others.
import albumentations as A
import numpy as np
transform = A.Compose([
A.RandomCrop(256, 256),
A.HorizontalFlip(),
A.RandomBrightnessContrast(),
A.RGBShift(),
A.Normalize(),
])
evaluation_transform = A.Compose([
A.PadIfNeeded(256, 256),
A.Normalize(),
])
transform.save_pretrained("qubvel-hf/albu", key="train")
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_train.json"
transform.save_pretrained("qubvel-hf/albu", key="train", push_to_hub=True)
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_train.json"
# + push the transform to the Hub to the repository "qubvel-hf/albu"
transform.push_to_hub("qubvel-hf/albu", key="train")
# ^ this will push the transform to the Hub to the repository "qubvel-hf/albu" (without saving it locally)
loaded_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="train")
# ^ this will load the transform from the local folder if it exists, or from the Hub repository "qubvel-hf/albu"
evaluation_transform.save_pretrained("qubvel-hf/albu", key="eval", push_to_hub=True)
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_eval.json"
loaded_evaluation_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="eval")
# ^ this will load the transform from the Hub repository "qubvel-hf/albu"
by @qubvel
Speedups
These transforms should be faster for all types of images, but speedups were measured only for three-channel uint8 images:
- RGBShift: 2X (+106%)
- GaussNoise: 3.3X (+ 236%)
Deprecations
Deprecated always_apply
For years we had two parameters in transform constructors: the probability p and always_apply. The interplay between them is not always obvious, and intuitively always_apply=True should be equivalent to p=1.
always_apply is deprecated now. always_apply=True still works, but it will be removed in the future. Use p=1 instead.
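For example, for any transform:
Old way:
HorizontalFlip(always_apply=True)
New way:
HorizontalFlip(p=1)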
by @ayasyrev
RandomFog
Updated interface for RandomFog
Old way:
RandomFog(fog_coef_lower=0.3, fog_coef_upper=1)
New way:
RandomFog(fog_coef_range=(0.3, 1))
by @ternaus
Improvements and bugfixes
Disable check for updates
When one imports the Albumentations library, there is a check that the latest version is installed.
To disable this check, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
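For example, from Python the variable can be set before the library is imported:
import os

os.environ["NO_ALBUMENTATIONS_UPDATE"] = "1"  # must be set before the import below

import albumentations as A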
by @lerignoux
Fix for deprecation warnings
For a set of transforms we were throwing deprecation warnings even when the modern version of the interface was used. Fixed. by @ternaus
Albucore
We moved low-level operations like add, multiply, normalize, etc. to a separate library: https://github.com/albumentations-team/albucore
There are numerous ways to perform such operations in OpenCV and NumPy, and there is no clear winner; results depend on the image type.
A separate library gives us confidence that we picked the fastest version that works on any image type.
by @ternaus
Bugfixes
Various bugfixes by @ayasyrev @immortalCO
Albumentations 1.4.8 Release Notes
- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Documentation
Added links in the documentation to the UI on Hugging Face for exploring hyperparameters visually.
Deprecations
RandomSnow
Updated interface:
Old way:
transform = A.Compose([A.RandomSnow(
snow_point_lower=0.1,
snow_point_upper=0.3,
p=0.5
)])
New way:
transform = A.Compose([A.RandomSnow(
snow_point_range=(0.1, 0.3),
p=0.5
)])
RandomRain
Old way:
transform = A.Compose([A.RandomRain(
slant_lower=-10,
slant_upper=10,
p=0.5
)])
New way:
transform = A.Compose([A.RandomRain(
slant_range=(-10, 10),
p=0.5
)])
Improvements
Created a library with core functions, albucore, and moved a few helper functions there.
We need this library to be sure that transforms are:
- At least as fast as numpy and opencv. For some functions it is possible to be faster than both of them.
- Easier to debug.
- Usable in other projects not related to Albumentations.
Bugfixes
- Bugfix in check_for_updates. Now the pipeline does not throw an error regardless of why we cannot check for updates.
- Bugfix in RandomShadow. Does not create an unexpected purple color on bright white regions with shadow overlay anymore.
- BugFix in Compose. Now Compose([]) does not throw an error, but just works as NoOp. by @ayasyrev
- Bugfix in min_max normalization. Now returns 0 and not NaN on constant images. by @ternaus
- Bugfix in CropAndPad. Now we can sample pad/crop values for all sides with an interface like ((-0.1, -0.2), (-0.2, -0.3), (0.3, 0.4), (0.4, 0.5)). by @christian-steinmeyer
- Small refactoring to decrease tech debt by @ternaus and @ayasyrev
Albumentations 1.4.7 Release Notes
- Support our work
- Documentation
- Deprecations
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Documentation
- Added a tutorial to the website on how to use Albumentations with Hugging Face for object detection. Based on the tutorial by @qubvel
Deprecations
ImageCompression
Old way:
transform = A.Compose([A.ImageCompression(
quality_lower=75,
quality_upper=100,
p=0.5
)])
New way:
transform = A.Compose([A.ImageCompression(
quality_range=(75, 100),
p=0.5
)])
Downscale
Old way:
transform = A.Compose([A.Downscale(
scale_min=0.25,
scale_max=1,
interpolation= {"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
p=0.5
)])
New way:
transform = A.Compose([A.Downscale(
scale_range=(0.25, 1),
interpolation_pair = {"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
p=0.5
)])
As of now both ways work and will provide the same result, but old functionality will be removed in later releases.
by @ternaus
Improvements
- Bugfix in Blur.
- Bugfix in bbox clipping. It may not be intuitive, but boxes should be clipped by height, width and not height - 1, width - 1. by @ternaus
- Allow composing only the keys that are required. Any extra unnecessary key will give an error. by @ayasyrev
- In PadIfNeeded, if the value parameter is not None but the border mode is reflection, the border mode is changed to cv2.BORDER_CONSTANT. by @ternaus
Albumentations 1.4.6 Release Notes
This is an out-of-schedule release with a fix for a bug that was introduced in version 1.4.5.
In version 1.4.5 there was a bug that went unnoticed: if you used a pipeline that consisted only of ImageOnly transforms but passed bounding boxes into it, you would get an error.
If such a pipeline had at least one non-ImageOnly transform, say HorizontalFlip or Crop, everything would work as expected.
We fixed the issue and added tests to be sure that it will not happen in the future.
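For reference, a pipeline of the shape that used to fail now works, for example:
import albumentations as A
import numpy as np

# Only image-only transforms, but bounding boxes are still passed through.
transform = A.Compose(
    [A.RandomBrightnessContrast(p=1)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

result = transform(
    image=np.zeros((100, 100, 3), dtype=np.uint8),
    bboxes=[(10, 10, 50, 50)],
    labels=[0],
)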
Albumentations 1.4.5 Release Notes
- Support our work
- Highlights
- Deprecations
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Highlights
Bbox clipping
Before version 1.4.5 it was assumed that bounding boxes that are fed into the augmentation pipeline should not extend outside of the image.
Now we added an option to clip boxes to the image size before augmenting them. This makes the pipeline more robust to inaccurate labeling.
Example:
This will fail if boxes extend outside of the image:
transform = A.Compose([
A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco'))
Clipping bounding boxes to the image size:
transform = A.Compose([
A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco', clip=True))
by @ternaus
SelectiveChannelTransform
Added SelectiveChannelTransform, which allows applying transforms to a selected set of channels.
For example, it can be helpful when working with multispectral images, where RGB is a subset of the overall multispectral stack, which is common in satellite imagery.
Example:
aug = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.SelectiveChannelTransform(
            transforms=[A.ColorJitter(p=0.5), A.ChromaticAberration(p=0.5)],
            channels=[1, 2, 18],
            p=1,
        ),
    ]
)
Here HorizontalFlip is applied to the whole multispectral image, but the pipeline of ColorJitter and ChromaticAberration is applied only to channels [1, 2, 18].
by @ternaus
Deprecations
CoarseDropout
Old way:
transform = A.Compose([A.CoarseDropout(
min_holes = 5,
max_holes = 8,
min_width = 3,
max_width = 12,
min_height = 4,
max_height = 5
)])
New way:
transform = A.Compose([A.CoarseDropout(
num_holes_range=(5, 8),
hole_width_range=(3, 12),
hole_height_range=(4, 5)
)])
As of now both ways work and will provide the same result, but old functionality will be removed in later releases.
Improvements and bug fixes
- A number of fixes and speedups in the core of the library (Compose and BasicTransform) by @ayasyrev
- Extended Contributor's guide by @ternaus
- Can use random for fill_value in CoarseDropout by @ternaus
- Fix in ToGray docstring by @wilderrodrigues
- BugFix in D4 - now works not only with square, but with rectangular images as well. By @ternaus
- BugFix in RandomCropFromBorders by @ternaus
Albumentations 1.4.4 Release Notes
- Support our work
- Highlights
- Transforms
- Improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
Transforms
Added D4 transform
Applies one of the eight possible D4 dihedral group transformations to a square-shaped input, maintaining the square shape. These transformations correspond to the symmetries of a square, including rotations and reflections. by @ternaus
The D4 group transformations include:
- e (identity): no transformation is applied
- r90: rotation by 90 degrees counterclockwise
- r180: rotation by 180 degrees
- r270: rotation by 270 degrees counterclockwise
- v: reflection across the vertical midline
- hvt: reflection across the anti-diagonal
- h: reflection across the horizontal midline
- t: reflection across the main diagonal
Could be applied to:
- image
- mask
- bounding boxes
- key points
Does not generate interpolation artifacts as there is no interpolation.
Provides the most value in tasks where data is invariant to rotations and reflections like:
- Top view drone and satellite imagery
- Medical images
Added new normalizations to Normalize transform
- standard - subtract fixed mean, divide by fixed std
- image - the same as standard, but mean and std are computed for each image independently
- image_per_channel - the same as image, but per channel
- min_max - subtract min(image) and divide by max(image) - min(image)
- min_max_per_channel - the same as min_max, but per channel
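A hedged example; the normalization argument name is an assumption, with the option value taken from the list above:
import albumentations as A

# Scale each image to [0, 1] using its own min and max, independently per channel.
transform = A.Compose([A.Normalize(normalization="min_max_per_channel")])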
by @ternaus
Changes in the interface of RandomShadow
The new, preferred way is to use num_shadows_limit instead of num_shadows_lower / num_shadows_upper.
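For example (the range values are just illustrative):
Old way:
RandomShadow(num_shadows_lower=1, num_shadows_upper=2)
New way:
RandomShadow(num_shadows_limit=(1, 2))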
by @ayasyrev
Improvements and bug fixes
Added check for input parameters to transforms with Pydantic
Now all input parameters are validated and prepared with Pydantic. This will prevent bugs, when transforms are initialized without errors with parameters that are outside of allowed ranges.
by @ternaus
Updates in RandomGridShuffle
- Bugfix by @ayasyrev
- Transform updated to work even if side is not divisible by the number of tiles. by @ternaus
New way to add additional targets
Standard way uses additional_targets
transform = A.Compose(
transforms=[A.Rotate(limit=(90.0, 90.0), p=1.0)],
keypoint_params=A.KeypointParams(
angle_in_degrees=True,
check_each_transform=True,
format="xyas",
label_fields=None,
remove_invisible=False,
),
additional_targets={"keypoints2": "keypoints"},
)
Now you can also add them using add_targets:
transform = A.Compose(
transforms=[A.Rotate(limit=(90.0, 90.0), p=1.0)],
keypoint_params=A.KeypointParams(
angle_in_degrees=True,
check_each_transform=True,
format="xyas",
label_fields=None,
remove_invisible=False,
),
)
transform.add_targets({"keypoints2": "keypoints"})
by @ayasyrev
Small fixes
- Small speedup in the code for transforms that use the add_weighted function by @gogetron
- Fix in error message in Affine transform by @matsumotosan
- Bugfix in Sequential by @ayasyrev
Documentation
- Updated Contributor's guide. by @ternaus
- Added example notebook on how to apply D4 to images, masks, bounding boxes and key points. by @ternaus
- Added example notebook on how to apply RandomGridShuffle to images, masks and keypoints. by @ternaus
Albumentations 1.4.3 Release Notes
- Support our work
- Highlights
- New transform
- Minor improvements and bug fixes
Support Our Work
- Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
- Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click.
- Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations
New transform
- Added Morphological transform that modifies the structure of the image. Dilation expands the white (foreground) regions in a binary or grayscale image, while erosion shrinks them.
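A minimal usage sketch; the scale and operation argument names are assumptions, so check the transform's docstring:
import albumentations as A

# operation (assumed name) selects "dilation" or "erosion";
# scale (assumed name) controls the size of the structuring element.
transform = A.Compose([A.Morphological(scale=(2, 3), operation="dilation", p=1)])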
Minor improvements and bug fixes
- Updated benchmark for uint8 images, processed on CPU. Added Kornia and Augly. LINK by @ternaus
- Bugfix in FDA transform by @ternaus
- Now RandomSizedCrop supports the same signature as the analogous transform in torchvision by @zetyquickly
Albumentations 1.4.2 Release Notes
- Request
- Highlights
- New transform
- New functionality
- Improvements and bug fixes
Request
- If you enjoy using the library as an individual developer or as a representative of a company, please consider becoming a sponsor for the library. Every dollar helps.
- If you did not give our repo a ⭐ yet, it is only one mouse click.
- If you have feature requests or proposals, or encounter issues, submit them to issues or ask in the Discord server for Albumentations
New transform
Added ChromaticAberration transform.
Left: Original, Middle: Chromatic aberration (default args, mode="green_purple"), Right: Chromatic aberration (default args, mode="red_blue")
(Image is from our internal mobile mapping dataset)
New functionality
- Return mixing parameter for MixUp transform by @Dipet. For more details, see the Tutorial on MixUp.
Improvements and Bugfixes
- Do not throw a deprecation warning when people do not use deprecated parameters in AdvancedBlur by @Aloqeely
- Updated CONTRIBUTORS.md for Windows users by @Aloqeely
- Fixed docstring for DownScale transform by @ryoryon66
- Bugfix in PadIfNeeded serialization by @ternaus