bioimageloader.transforms
Custom transforms for bioimages based on albumentations
- class bioimageloader.transforms.HWCToCHW(always_apply: bool = False, p: float = 1.0)
Transpose axes from (H, W, C) to (C, H, W)
By default, bioimageloader returns images in shape (H, W, C=3) regardless of their color mode, for easy handling. Some models expect images of shape (C, H, W) as input; this transform converts (H, W, C) to (C, H, W).
See also
albumentations.ImageOnlyTransform
super class
Examples
>>> import albumentations as A
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import HWCToCHW
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     HWCToCHW(),
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)
>>> dset = datasets[0]  # select only the first dataset
>>> data = dset[0]  # select only the first image
>>> print(data['image'].shape)
(3, H, W)
- class bioimageloader.transforms.SqueezeGrayImageCHW(keep_dim=True, always_apply: bool = False, p: float = 1.0)
Squeeze grayscale image from (3, H, W) to (1, H, W)|(H, W)
By default, bioimageloader returns images in 3 channels regardless of their color mode, for easy handling. If a model requires input of shape (C=1, H, W), first use HWCToCHW and then this transform to convert from (3, H, W) to (1, H, W).
- Parameters
- keep_dim : bool, default: True
Keep channel axis to 1
- always_apply : bool, default: False
- p : float, default: 1.0
Value between [0.0, 1.0]
See also
albumentations.ImageOnlyTransform
super class
bioimageloader.transforms.HWCToCHW
bioimageloader.transforms.SqueezeGrayImageHWC
Examples
>>> import albumentations as A
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import HWCToCHW, SqueezeGrayImageCHW
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     HWCToCHW(),
...     SqueezeGrayImageCHW(),
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)
>>> dset = datasets[0]  # select only the first dataset
>>> data = dset[0]  # select only the first image
>>> print(data['image'].shape)
(1, H, W)
You can set keep_dim to False to drop the channel axis entirely, as some models require, but prefer SqueezeGrayImageHWC for that.
>>> transforms = A.Compose([
...     HWCToCHW(),
...     SqueezeGrayImageCHW(keep_dim=False),  # drop channel axis
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)
>>> dset = datasets[0]  # select only the first dataset
>>> data = dset[0]  # select only the first image
>>> print(data['image'].shape)
(H, W)
- class bioimageloader.transforms.SqueezeGrayImageHWC(keep_dim=False, always_apply: bool = False, p: float = 1.0)
Squeeze grayscale image from (H, W, 3) to (H, W)|(H, W, 1)
By default, bioimageloader returns images in 3 channels regardless of their color mode, for easy handling. If a model requires input of shape (H, W), use this transform to convert from (H, W, 3) to (H, W).
- Parameters
- keep_dim : bool, default: False
Keep channel axis to 1
- always_apply : bool, default: False
- p : float, default: 1.0
Value between [0.0, 1.0]
See also
albumentations.ImageOnlyTransform
super class
bioimageloader.transforms.SqueezeGrayImageCHW
Examples
>>> import albumentations as A
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import SqueezeGrayImageHWC
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     SqueezeGrayImageHWC(),
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)
>>> dset = datasets[0]  # select only the first dataset
>>> data = dset[0]  # select only the first image
>>> print(data['image'].shape)
(H, W)
- class bioimageloader.transforms.ExpandToRGB(always_apply: bool = False, p: float = 1.0)
Make sure image/mask has 3 channels, expanding either (H, W) or (H, W, 1) to (H, W, 3)
Expands the channel axis of the image array.
Notes
When used with albumentations.pytorch.ToTensorV2, set transpose_mask=True to transpose masks as well (see the sketch below).
- Targets:
image, mask
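As a rough sketch, ExpandToRGB can sit in front of ToTensorV2 with transpose_mask=True as the note recommends. This follows the Compose pattern of the examples above; the config file name is illustrative.
>>> import albumentations as A
>>> from albumentations.pytorch import ToTensorV2
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import ExpandToRGB
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     ExpandToRGB(),                    # (H, W) | (H, W, 1) -> (H, W, 3)
...     ToTensorV2(transpose_mask=True),  # 3-channel masks become (C, H, W) too
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)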
- class bioimageloader.transforms.RGBToGray(always_apply: bool = False, p: float = 1.0)
ToGray preserves all 3 channels of the input; this transform truncates the channel dimension instead.
Warning
This will be deprecated. Grayscale conversion is done in each Dataset.
Notes
- Targets:
image
- Image types:
uint8, float32
- class bioimageloader.transforms.ToGrayBySum(always_apply: bool = False, p: float = 1.0, num_channels: Optional[int] = None)
Convert image to grayscale by taking the mean of existing channels
For 2-channel, multi-modal images, ToGray does not make sense. Normally, rgb2gray conversion is a weighted linear sum of the RGB values (OpenCV [1], Pillow [2]); simply summing with equal weights is more appropriate for bioimages.
Warning
This will be deprecated. Grayscale conversion is done through the grayscale and grayscale_mode arguments in each Dataset (see the sketch below).
References
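A rough sketch of the replacement mentioned in the warning. The collection class name, the data path, and the 'equal' mode value are illustrative assumptions, not confirmed API; only the existence of the grayscale and grayscale_mode arguments comes from the warning above.
>>> # hypothetical example: DSB2018 stands in for any bioimageloader Dataset;
>>> # 'equal' is an assumed grayscale_mode value meaning equal-weight summation
>>> from bioimageloader.collections import DSB2018
>>> dset = DSB2018('path/to/data', grayscale=True, grayscale_mode='equal')
>>> image = dset[0]['image']  # grayscale conversion handled by the Dataset itself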
- class bioimageloader.transforms.ChannelReorder(order: Tuple[int, int, int], always_apply: bool = False, p: float = 1.0)
Reorder channels
Expects images with 3 channels. Reorders the channels and makes the array contiguous in 'C' order.
- Parameters
- order : tuple of three integers
Reorder by indexing
- always_apply : bool, default: False
- p : float, default: 1.0
Value between [0.0, 1.0]
See also
albumentations.ImageOnlyTransform
super class
albumentations.augmentations.transforms.ChannelShuffle
random shuffling
Examples
>>> import numpy as np
>>> from bioimageloader.transforms import ChannelReorder
>>> arr = np.arange(12).reshape((2, 2, 3))
>>> print(arr)
[[[ 0  1  2]
  [ 3  4  5]]

 [[ 6  7  8]
  [ 9 10 11]]]
>>> reorder = ChannelReorder((2, 1, 0))
>>> arr_reordered = reorder.apply(arr)
>>> print(arr_reordered)
[[[ 2  1  0]
  [ 5  4  3]]

 [[ 8  7  6]
  [11 10  9]]]
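The transform also fits the Compose pipeline pattern used throughout this page; a sketch (the config file name is illustrative, as above) that reverses the channel order, e.g. to swap BGR-ordered images to RGB:
>>> import albumentations as A
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import ChannelReorder
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     ChannelReorder((2, 1, 0)),  # reverse channel order, e.g. BGR -> RGB
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)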
- class bioimageloader.transforms.NormalizePercentile(qmin: float = 0.0, qmax: float = 99.8, per_channel: bool = False, clip: bool = False, always_apply: bool = False, p: float = 1.0)
Normalize using percentile
Compute the q-th percentile minimum and maximum values from the given image array and normalize with them. Uses numpy.percentile() [1].
- Parameters
- qmin : float, default: 0.0
Lower bound quantile in range of [0, 100)
- qmax : float, default: 99.8
Upper bound quantile in range of (0, 100]
- per_channel : bool, default: False
Whether to calculate percentile per channel or not
- clip : bool, default: False
Whether to clip to [0, 1] or not. Read more in the Returns section.
- always_apply : bool, default: False
- p : float, default: 1.0
Value between [0.0, 1.0]
- Returns
- img_norm : numpy.ndarray
Normalized image in float32, in range [0.0, 1.0] if clip is set to True; otherwise values may fall below 0.0 and above 1.0.
See also
albumentations.ImageOnlyTransform
super class
References
Examples
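A minimal usage sketch following the Compose pattern above; the config file name and the qmin value are illustrative. With clip=True the output stays within [0.0, 1.0] in float32, as described in the Returns section.
>>> import albumentations as A
>>> from bioimageloader import Config
>>> from bioimageloader.transforms import NormalizePercentile
>>> cfg = Config('config.yml')
>>> transforms = A.Compose([
...     NormalizePercentile(qmin=1.0, qmax=99.8, clip=True),
... ])
>>> datasets = cfg.load_datasets(transforms=transforms)
>>> dset = datasets[0]  # select only the first dataset
>>> data = dset[0]  # select only the first image
>>> data['image'].dtype
dtype('float32')
>>> 0.0 <= data['image'].min() and data['image'].max() <= 1.0
True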
- class bioimageloader.transforms.BinarizeMask(always_apply: bool = False, p: float = 1.0, dtype: Optional[str] = None, val: Optional[Union[float, int]] = None)
Transform instance masks into binary masks
Note that when composed with other transforms, BinarizeMask should rather come after them, because the resulting dtype will be boolean, which albumentations does not handle well. If you set val and dtype to values compatible with albumentations, you can place BinarizeMask anywhere in the pipeline safely.
- Parameters
- always_apply : bool, default: False
- p : float, default: 1.0
Value between [0.0, 1.0]
- dtype : str or dtype, optional
Determine dtype. The default dtype becomes float32 when val is set; otherwise it becomes boolean.
- val : float, optional
Change the binarized mask value to something other than True. Setting val also enforces dtype to be float32.
- Returns
- mask : numpy.ndarray
Mask with binary values, either [False, True] or [0, val]
See also
albumentations.DualTransform
super class
Examples
>>> import numpy as np
>>> from bioimageloader.transforms import BinarizeMask
>>> # instance mask
>>> mask_inst = np.arange(12).reshape((3, 4))
>>> print(mask_inst)
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
>>> binarizemask = BinarizeMask()
>>> mask_binary = binarizemask.apply_to_mask(mask_inst)
>>> print(mask_binary)
[[False  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]]
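To illustrate the ordering note above, a sketch where setting val keeps the mask in float32, so BinarizeMask can safely come before other albumentations transforms; the companion crop transform and its size are arbitrary choices.
>>> import albumentations as A
>>> from bioimageloader.transforms import BinarizeMask
>>> transforms = A.Compose([
...     BinarizeMask(val=1.0),   # float32 mask with values 0.0 and 1.0
...     A.RandomCrop(256, 256),  # fine to come after, since dtype is not boolean
... ])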