decode.neuralfitter package#
Subpackages#
- decode.neuralfitter.inference package
- decode.neuralfitter.models package
- Submodules
- decode.neuralfitter.models.model_param module
- decode.neuralfitter.models.model_speced_impl module
SigmaMUNet
SigmaMUNet.apply_detection_nonlin()
SigmaMUNet.apply_nonlin()
SigmaMUNet.bg_ch_ix
SigmaMUNet.ch_out
SigmaMUNet.forward()
SigmaMUNet.mt_heads
SigmaMUNet.out_channels_heads
SigmaMUNet.p_ch_ix
SigmaMUNet.parse()
SigmaMUNet.pxyz_mu_ch_ix
SigmaMUNet.pxyz_sig_ch_ix
SigmaMUNet.sigma_eps_default
SigmaMUNet.sigmoid_ch_ix
SigmaMUNet.tanh_ch_ix
SigmaMUNet.training
SigmaMUNet.weight_init()
- decode.neuralfitter.models.unet_param module
- decode.neuralfitter.models.unet_parts module
- Module contents
- decode.neuralfitter.train package
- decode.neuralfitter.utils package
- Submodules
- decode.neuralfitter.utils.collate module
- decode.neuralfitter.utils.last_layer_dynamics module
- decode.neuralfitter.utils.log_train_val_progress module
- decode.neuralfitter.utils.logger module
DictLogger
MultiLogger
NoLog
NoLog.add_audio()
NoLog.add_custom_scalars()
NoLog.add_embedding()
NoLog.add_figure()
NoLog.add_figures()
NoLog.add_graph()
NoLog.add_histogram()
NoLog.add_hparams()
NoLog.add_image()
NoLog.add_images()
NoLog.add_mesh()
NoLog.add_pr_curve()
NoLog.add_scalar()
NoLog.add_scalar_dict()
NoLog.add_scalars()
NoLog.add_text()
NoLog.add_video()
SummaryWriter
- decode.neuralfitter.utils.padding_calc module
- decode.neuralfitter.utils.probability module
- decode.neuralfitter.utils.processing module
- Module contents
Submodules#
decode.neuralfitter.coord_transform module#
- class decode.neuralfitter.coord_transform.Offset2Coordinate(xextent, yextent, img_shape)[source]#
Bases:
object
- Parameters:
xextent (tuple) – extent in x
yextent (tuple) – extent in y
img_shape (tuple) – image shape
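Offset2Coordinate converts sub-pixel offset predictions into absolute coordinates over the given extent. A minimal 1D sketch of that conversion, assuming offsets are expressed in pixel units relative to the pixel centre (the library’s exact convention may differ; names are illustrative, not the library’s API):

```python
def offsets_to_coords(offsets, xextent, img_w):
    """Convert per-pixel x offsets (in pixel units) to absolute x coordinates."""
    bin_size = (xextent[1] - xextent[0]) / img_w
    coords = []
    for ix, off in enumerate(offsets):
        centre = xextent[0] + (ix + 0.5) * bin_size  # centre of pixel ix
        coords.append(centre + off * bin_size)       # shift by the sub-pixel offset
    return coords

offsets_to_coords([0.0, 0.25], xextent=(0.0, 2.0), img_w=2)
# pixel centres sit at 0.5 and 1.5; the second is shifted by 0.25 px -> [0.5, 1.75]
```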
decode.neuralfitter.dataset module#
- class decode.neuralfitter.dataset.InferenceDataset(*, frames, frame_proc, frame_window)[source]#
Bases:
SMLMStaticDataset
- Parameters:
frames (torch.Tensor) – frames
frame_proc – frame processing function
frame_window (int) – frame window
- class decode.neuralfitter.dataset.SMLMAPrioriDataset(*, simulator, em_proc, frame_proc, bg_frame_proc, tar_gen, weight_gen, frame_window, pad, return_em=False)[source]#
Bases:
SMLMLiveDataset
- Parameters:
frames (torch.Tensor) – frames. N x H x W
em (list of EmitterSets) – ground-truth emitter-sets
frame_proc – frame processing function
em_proc – emitter processing / filter function
tar_gen – target generator function
weight_gen – weight generator function
frame_window (int) – width of frame window
return_em (bool) – return EmitterSet in getitem method.
- property emitter: EmitterSet#
Return emitters with the same indexing as the frames; i.e. when pad ‘same’ is used, the emitters’ frame index is not changed. When pad is None, the respective frame index is corrected for the frame window.
- class decode.neuralfitter.dataset.SMLMDataset(*, em_proc, frame_proc, bg_frame_proc, tar_gen, weight_gen, frame_window, pad=None, return_em)[source]#
Bases:
Dataset
Init new dataset.
- Parameters:
em_proc – Emitter processing
frame_proc – Frame processing
bg_frame_proc – Background frame processing
tar_gen – Target generator
weight_gen – Weight generator
frame_window (int) – number of frames per sample / size of frame window
pad (Optional[str]) – pad mode, applicable for the first few and last few frames (relevant when a frame window is used)
return_em (bool) – return target emitter
- return_em#
Sanity
- class decode.neuralfitter.dataset.SMLMLiveDataset(*, simulator, em_proc, frame_proc, bg_frame_proc, tar_gen, weight_gen, frame_window, pad, return_em=False)[source]#
Bases:
SMLMStaticDataset
- Parameters:
frames (torch.Tensor) – frames. N x H x W
em (list of EmitterSets) – ground-truth emitter-sets
frame_proc – frame processing function
em_proc – emitter processing / filter function
tar_gen – target generator function
weight_gen – weight generator function
frame_window (int) – width of frame window
return_em (bool) – return EmitterSet in getitem method.
- class decode.neuralfitter.dataset.SMLMLiveSampleDataset(*, simulator, ds_len, em_proc, frame_proc, bg_frame_proc, tar_gen, weight_gen, frame_window, return_em=False)[source]#
Bases:
SMLMDataset
Init new dataset.
- Parameters:
em_proc – Emitter processing
frame_proc – Frame processing
bg_frame_proc – Background frame processing
tar_gen – Target generator
weight_gen – Weight generator
frame_window – number of frames per sample / size of frame window
pad – pad mode, applicable for first few, last few frames (relevant when frame window is used)
return_em – return target emitter
- class decode.neuralfitter.dataset.SMLMStaticDataset(*, frames, emitter, frame_proc=None, bg_frame_proc=None, em_proc=None, tar_gen=None, bg_frames=None, weight_gen=None, frame_window=3, pad=None, return_em=True)[source]#
Bases:
SMLMDataset
- Parameters:
frames (torch.Tensor) – frames. N x H x W
em (list of EmitterSets) – ground-truth emitter-sets
frame_proc – frame processing function
em_proc – emitter processing / filter function
tar_gen – target generator function
weight_gen – weight generator function
frame_window (int) – width of frame window
return_em (bool) – return EmitterSet in getitem method.
decode.neuralfitter.de_bias module#
- class decode.neuralfitter.de_bias.UniformizeOffset(n_bins)[source]#
Bases:
object
- Parameters:
n_bins (int) – number of bins. The bias scales with the uncertainty of the localization; therefore all detections are binned according to their predicted uncertainty, and detections within different bins are rescaled separately.
- forward(x)[source]#
Rescales x and y offsets (in place) so that they are distributed uniformly within [-0.5, 0.5] to correct for biased outputs. Forwards frames through the post-processor.
- Return type:
Tensor
- Parameters:
x (torch.Tensor) – features to be converted. Expecting x/y coordinates in channel index 2, 3 and x/y sigma coordinates in channel index 6, 7 expected shape \((N, C, H, W)\)
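The correction in forward amounts to a histogram equalization of the offset distribution. A library-free sketch of the idea, using a rank transform to map samples onto a uniform [-0.5, 0.5] (the actual implementation bins detections by predicted uncertainty first and works on histograms rather than exact ranks):

```python
def uniformize(values):
    """Rank-transform values so they are approximately uniform on [-0.5, 0.5]."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = (rank + 0.5) / n - 0.5  # midpoint of each rank's uniform bin
    return out

uniformize([0.4, -0.3, 0.1])  # -> [1/3, -1/3, 0.0]
```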
decode.neuralfitter.em_filter module#
Here we provide some filtering on EmitterSets.
- class decode.neuralfitter.em_filter.EmitterFilter[source]#
Bases:
ABC
- abstract forward(em)[source]#
Forwards a set of emitters through the filter implementation
- Parameters:
em (EmitterSet) – emitters
- Return type:
- class decode.neuralfitter.em_filter.NoEmitterFilter[source]#
Bases:
EmitterFilter
The no filter
- class decode.neuralfitter.em_filter.PhotonFilter(th)[source]#
Bases:
EmitterFilter
- Parameters:
th – (int, float) photon threshold
- class decode.neuralfitter.em_filter.TarEmitterFilter(tar_ix=0)[source]#
Bases:
EmitterFilter
- Parameters:
tar_ix – (int) index of the target frame
decode.neuralfitter.frame_processing module#
- class decode.neuralfitter.frame_processing.AutoCenterCrop(px_fold)[source]#
Bases:
FrameProcessing
Automatic cropping in the centre. Specify px_fold, a multiple which the target frame size must satisfy; the frame will then be center-cropped to this size.
- Parameters:
px_fold (int) – integer multiple which the frame dimensions (H, W) must satisfy
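The cropped size is the largest multiple of px_fold that fits in each dimension, taken centrally. A small sketch of that size calculation (an illustrative helper, not the library’s API):

```python
def center_crop_bounds(size, px_fold):
    """Return the centred [lo, hi) slice whose length is the largest multiple of px_fold."""
    new = (size // px_fold) * px_fold  # largest multiple of px_fold that fits
    if new == 0:
        raise ValueError("frame smaller than px_fold")
    lo = (size - new) // 2
    return lo, lo + new

center_crop_bounds(37, 8)  # -> (2, 34): 32 px kept, centred
```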
- class decode.neuralfitter.frame_processing.AutoPad(px_fold, mode='constant')[source]#
Bases:
AutoCenterCrop
Pad frame to a size that is divisible by px_fold. Useful to prepare an experimental frame for forwarding through network.
- Parameters:
px_fold (int) – number of pixels the resulting frame size should be divisible by
mode (str) – torch mode for padding; refer to the docs of torch.nn.functional.pad
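In contrast to the crop, the padded size is the next multiple of px_fold. A sketch of the padding arithmetic (an illustrative helper, not the library’s API):

```python
import math

def pad_to_multiple(size, px_fold):
    """Left/right padding so that size becomes the next multiple of px_fold."""
    target = math.ceil(size / px_fold) * px_fold
    pad = target - size
    return pad // 2, pad - pad // 2  # split (almost) evenly

pad_to_multiple(37, 8)  # -> (1, 2): 37 + 1 + 2 = 40
```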
- class decode.neuralfitter.frame_processing.Mirror2D(dims)[source]#
Bases:
FrameProcessing
Mirror the specified dimensions. Providing the dim index in negative format is recommended. Given format N x C x H x W, to mirror H and W set dims=(-2, -1).
- Parameters:
dims (Tuple) – dimensions
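For the default dims=(-2, -1), the operation corresponds to torch.flip over H and W. A library-free sketch on a nested-list frame:

```python
def mirror2d(frame):
    """Mirror the last two (H, W) dimensions of a nested-list 'frame'."""
    return [row[::-1] for row in frame[::-1]]

mirror2d([[1, 2], [3, 4]])  # -> [[4, 3], [2, 1]]
```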
decode.neuralfitter.losscollection module#
decode.neuralfitter.post_processing module#
- class decode.neuralfitter.post_processing.ConsistencyPostprocessing(*, raw_th, em_th, xy_unit, img_shape, ax_th=None, vol_th=None, lat_th=None, p_aggregation='pbinom_cdf', px_size=None, match_dims=2, diag=0, pphotxyzbg_mapping=[0, 1, 2, 3, 4, -1], num_workers=0, skip_th=None, return_format='batch-set', sanity_check=True)[source]#
Bases:
PostProcessing
- Parameters:
pphotxyzbg_mapping –
raw_th –
em_th –
xy_unit –
img_shape –
ax_th –
vol_th –
lat_th –
p_aggregation –
px_size –
match_dims –
diag –
num_workers –
skip_th – relative fraction of the detection output that needs to be active for post-processing to be skipped. This is useful during training when the network has not yet converged and major parts of the detection output are white (i.e. detections are not sparse).
return_format –
sanity_check –
- forward(features)[source]#
Forward the feature map through the post processing and return an EmitterSet or a list of EmitterSets. For the input features we use the following convention:
0 - Detection channel
1 - Photon channel
2 - ‘x’ channel
3 - ‘y’ channel
4 - ‘z’ channel
5 - Background channel
Expecting x and y channels in nano-metres.
- Parameters:
features (torch.Tensor) – Features of size \((N, C, H, W)\)
- Returns:
Specified by return_format argument, EmitterSet in nano metres.
- Return type:
EmitterSet or list of EmitterSets
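A toy read-out following the channel convention above: threshold the detection channel and gather the per-pixel values of the remaining channels. This sketch is not the library’s algorithm (the real post-processing additionally aggregates consistent neighbouring detections):

```python
P, PHOT, X, Y, Z, BG = 0, 1, 2, 3, 4, 5   # channel order from the docs above

def extract(features, raw_th):
    """features: nested list of shape C x H x W; return tuples for pixels above raw_th."""
    out = []
    rows, cols = len(features[P]), len(features[P][0])
    for r in range(rows):
        for c in range(cols):
            if features[P][r][c] >= raw_th:
                out.append(tuple(features[ch][r][c] for ch in (P, PHOT, X, Y, Z, BG)))
    return out

# one 1 x 2 frame: only the first pixel exceeds the 0.5 detection threshold
features = [[[0.9, 0.1]], [[1000.0, 5.0]], [[120.0, 0.0]],
            [[240.0, 0.0]], [[-50.0, 0.0]], [[10.0, 0.0]]]
extract(features, 0.5)  # -> [(0.9, 1000.0, 120.0, 240.0, -50.0, 10.0)]
```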
- classmethod parse(param, **kwargs)[source]#
Return an instance of this post-processing as specified by the parameters
- Parameters:
param –
- Returns:
ConsistencyPostProcessing
- sanity_check()[source]#
Performs some sanity checks. Part of the constructor; useful if you modify attributes later on and want to double check.
- skip_if(x)[source]#
Skip post-processing when a certain condition is met and the implementation would fail, e.g. too many bright pixels in the detection channel. The default implementation always returns False.
- Parameters:
x – network output
- Returns:
returns true when post-processing should be skipped
- Return type:
bool
- class decode.neuralfitter.post_processing.LookUpPostProcessing(raw_th, xy_unit, px_size=None, pphotxyzbg_mapping=(0, 1, 2, 3, 4, -1), photxyz_sigma_mapping=(5, 6, 7, 8))[source]#
Bases:
PostProcessing
- Parameters:
raw_th (float) – initial raw threshold
xy_unit (str) – xy unit
px_size – pixel size
pphotxyzbg_mapping (Union[list, tuple]) – channel index mapping of detection (p), photon, x, y, z, bg
- class decode.neuralfitter.post_processing.NoPostProcessing(xy_unit=None, px_size=None, return_format='batch-set')[source]#
Bases:
PostProcessing
- Parameters:
return_format (str) – return format of forward function. Must be ‘batch-set’ or ‘frame-set’. If ‘batch-set’, one instance of EmitterSet will be returned per forward call; if ‘frame-set’, a tuple of EmitterSet, one per frame, will be returned.
sanity_check (bool) – perform sanity check
- class decode.neuralfitter.post_processing.PostProcessing(xy_unit, px_size, return_format)[source]#
Bases:
ABC
- Parameters:
return_format (str) – return format of forward function. Must be ‘batch-set’ or ‘frame-set’. If ‘batch-set’, one instance of EmitterSet will be returned per forward call; if ‘frame-set’, a tuple of EmitterSet, one per frame, will be returned.
sanity_check (bool) – perform sanity check
- abstract forward(x)[source]#
Forward anything through the post-processing and return an EmitterSet
- Parameters:
x (Tensor) –
- Returns:
Returns as EmitterSet or as list of EmitterSets
- Return type:
EmitterSet or list
- skip_if(x)[source]#
Skip post-processing when a certain condition is met and the implementation would fail, e.g. too many bright pixels in the detection channel. The default implementation always returns False.
- Parameters:
x – network output
- Returns:
returns true when post-processing should be skipped
- Return type:
bool
- class decode.neuralfitter.post_processing.SpatialIntegration(raw_th, xy_unit, px_size=None, pphotxyzbg_mapping=(0, 1, 2, 3, 4, -1), photxyz_sigma_mapping=(5, 6, 7, 8), p_aggregation='norm_sum')[source]#
Bases:
LookUpPostProcessing
- Parameters:
raw_th (float) – probability threshold from which detections are considered
xy_unit (str) – unit of the xy coordinates
px_size – pixel size
pphotxyzbg_mapping (Union[list, tuple]) – channel index mapping
photxyz_sigma_mapping (Union[list, tuple, None]) – channel index mapping of sigma channels
p_aggregation (Union[str, Callable]) – aggregation method to aggregate probabilities; can be ‘sum’, ‘max’, ‘norm_sum’
decode.neuralfitter.sampling module#
decode.neuralfitter.scale_transform module#
- class decode.neuralfitter.scale_transform.AmplitudeRescale(scale=1.0, offset=0.0)[source]#
Bases:
object
- Parameters:
offset (float) –
scale (float) – reference value
- class decode.neuralfitter.scale_transform.FourFoldInverseOffsetRescale(*args, **kwargs)[source]#
Bases:
InverseOffsetRescale
Assumes scale_x, scale_y, scale_z to be symmetrically ranged and scale_phot to be ranged between 0 and 1.
- Parameters:
scale_x (float) – scale factor in x
scale_y – scale factor in y
scale_z – scale factor in z
scale_phot – scale factor for photon values
mu_sig_bg – offset and scaling for background
buffer – buffer to extend the scales overall
power – power factor
- class decode.neuralfitter.scale_transform.InverseOffsetRescale(*, scale_x, scale_y, scale_z, scale_phot, mu_sig_bg=(None, None), buffer=1.0, power=1.0)[source]#
Bases:
OffsetRescale
Assumes scale_x, scale_y, scale_z to be symmetrically ranged and scale_phot to be ranged between 0 and 1.
- Parameters:
scale_x (float) – scale factor in x
scale_y (float) – scale factor in y
scale_z (float) – scale factor in z
scale_phot (float) – scale factor for photon values
mu_sig_bg – offset and scaling for background
buffer – buffer to extend the scales overall
power – power factor
- class decode.neuralfitter.scale_transform.InverseParamListRescale(phot_max, z_max, bg_max)[source]#
Bases:
ParameterListRescale
Rescale network output trained with GMM Loss.
- class decode.neuralfitter.scale_transform.OffsetRescale(*, scale_x, scale_y, scale_z, scale_phot, mu_sig_bg=(None, None), buffer=1.0, power=1.0)[source]#
Bases:
object
Assumes scale_x, scale_y, scale_z to be symmetrically ranged and scale_phot to be ranged between 0 and 1.
- Parameters:
scale_x (float) – scale factor in x
scale_y (float) – scale factor in y
scale_z (float) – scale factor in z
scale_phot (float) – scale factor for photon values
mu_sig_bg – offset and scaling for background
buffer – buffer to extend the scales overall
power – power factor
- class decode.neuralfitter.scale_transform.ParameterListRescale(phot_max, z_max, bg_max)[source]#
Bases:
object
- class decode.neuralfitter.scale_transform.SpatialInterpolation(mode='nearest', size=None, scale_factor=None, impl=None)[source]#
Bases:
object
- Parameters:
mode (string, None) – mode used for interpolation; these are the modes of the torch interpolation function
impl (optional) – override function for interpolation
decode.neuralfitter.target_generator module#
- class decode.neuralfitter.target_generator.DisableAttributes(attr_ix)[source]#
Bases:
object
Allows disabling attribute prediction of the parameter list target, e.g. when you don’t want to predict z.
- Parameters:
attr_ix (Union[None, int, tuple, list]) – index of the attribute you want to disable (phot, x, y, z).
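Assuming ‘disabling’ means zeroing the corresponding column of the parameter list target (an assumption; the library may mask the attribute differently), the effect can be sketched as:

```python
def disable_attr(param_list, attr_ix):
    """Zero attribute column attr_ix (0=phot, 1=x, 2=y, 3=z) in every emitter row."""
    return [[0.0 if j == attr_ix else v for j, v in enumerate(row)]
            for row in param_list]

disable_attr([[1000.0, 0.2, -0.1, 250.0]], attr_ix=3)  # z disabled
# -> [[1000.0, 0.2, -0.1, 0.0]]
```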
- class decode.neuralfitter.target_generator.FourFoldEmbedding(xextent, yextent, img_shape, rim_size, roi_size, ix_low=None, ix_high=None, squeeze_batch_dim=False)[source]#
Bases:
TargetGenerator
- Parameters:
xy_unit – Which unit to use for target generator
ix_low – lower bound of frame / batch index
ix_high – upper bound of frame / batch index
squeeze_batch_dim (bool) – if lower and upper frame_ix are the same, squeeze out the batch dimension before return
- forward(em, bg=None, ix_low=None, ix_high=None)[source]#
Calculate the target as given by the emitters and background. Overwrites the default frame ix boundaries.
- Parameters:
em (EmitterSet) – set of emitters
bg (Optional[Tensor]) – background frame
ix_low (Optional[int]) – lower frame index
ix_high (Optional[int]) – upper frame index
- Return type:
Tensor
- Returns:
target frames
- class decode.neuralfitter.target_generator.ParameterListTarget(n_max, xextent, yextent, ix_low=None, ix_high=None, xy_unit='px', squeeze_batch_dim=False)[source]#
Bases:
TargetGenerator
- Target corresponding to the Gaussian-Mixture Model loss. Simply concatenates all emitters’ attributes, up to a maximum number of emitters, as a list.
- Parameters:
n_max (int) – maximum number of emitters (should be a multiple of what you draw on average)
xextent (tuple) – extent of the emitters in x
yextent (tuple) – extent of the emitters in y
ix_low – lower frame index
ix_high – upper frame index
xy_unit (str) – xy unit
squeeze_batch_dim (bool) – squeeze batch dimension before return
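The target layout can be sketched as a fixed-size list padded with zeros plus an activation mask marking the real emitters (illustrative code under that assumption, not the library’s API):

```python
def param_list_target(emitters, n_max):
    """emitters: list of (phot, x, y, z) tuples. Pad to n_max rows with a mask."""
    if len(emitters) > n_max:
        raise ValueError("more emitters than n_max")
    rows = [list(e) for e in emitters]
    rows += [[0.0] * 4 for _ in range(n_max - len(emitters))]  # zero padding
    mask = [1] * len(emitters) + [0] * (n_max - len(emitters))  # which rows are real
    return rows, mask

rows, mask = param_list_target([(1000.0, 1.5, 2.5, 0.0)], n_max=3)
# mask -> [1, 0, 0]
```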
- forward(em, bg=None, ix_low=None, ix_high=None)[source]#
Calculate the target as given by the emitters and background. Overwrites the default frame ix boundaries.
- Parameters:
em (EmitterSet) – set of emitters
bg (Optional[Tensor]) – background frame
ix_low (Optional[int]) – lower frame index
ix_high (Optional[int]) – upper frame index
- Returns:
target frames
- class decode.neuralfitter.target_generator.TargetGenerator(xy_unit='px', ix_low=None, ix_high=None, squeeze_batch_dim=False)[source]#
Bases:
ABC
- Parameters:
xy_unit – Which unit to use for target generator
ix_low (Optional[int]) – lower bound of frame / batch index
ix_high (Optional[int]) – upper bound of frame / batch index
squeeze_batch_dim (bool) – if lower and upper frame_ix are the same, squeeze out the batch dimension before return
- abstract forward(em, bg=None, ix_low=None, ix_high=None)[source]#
Calculate the target as given by the emitters and background. Overwrites the default frame ix boundaries.
- Parameters:
em (EmitterSet) – set of emitters
bg (Optional[Tensor]) – background frame
ix_low (Optional[int]) – lower frame index
ix_high (Optional[int]) – upper frame index
- Return type:
Tensor
- Returns:
target frames
- class decode.neuralfitter.target_generator.UnifiedEmbeddingTarget(xextent, yextent, img_shape, roi_size, ix_low=None, ix_high=None, squeeze_batch_dim=False)[source]#
Bases:
TargetGenerator
- Parameters:
xy_unit – Which unit to use for target generator
ix_low – lower bound of frame / batch index
ix_high – upper bound of frame / batch index
squeeze_batch_dim (bool) – if lower and upper frame_ix are the same, squeeze out the batch dimension before return
- forward(em, bg=None, ix_low=None, ix_high=None)[source]#
Calculate the target as given by the emitters and background. Overwrites the default frame ix boundaries.
- Parameters:
em (EmitterSet) – set of emitters
bg (Optional[Tensor]) – background frame
ix_low (Optional[int]) – lower frame index
ix_high (Optional[int]) – upper frame index
- Return type:
Tensor
- Returns:
target frames
- forward_(xyz, phot, frame_ix, ix_low, ix_high)[source]#
Get index of central bin for each emitter.
- Return type:
Tensor
- property xextent#
- property yextent#
decode.neuralfitter.train_val_impl module#
decode.neuralfitter.weight_generator module#
- class decode.neuralfitter.weight_generator.FourFoldSimpleWeight(*, xextent, yextent, img_shape, roi_size, rim, weight_mode='const', weight_power=None)[source]#
Bases:
WeightGenerator
- Parameters:
xy_unit – Which unit to use for target generator
ix_low – lower bound of frame / batch index
ix_high – upper bound of frame / batch index
squeeze_batch_dim – if lower and upper frame_ix are the same, squeeze out the batch dimension before return
- forward(tar_em, tar_frames, ix_low=None, ix_high=None)[source]#
Calculate weight map based on target frames and target emitters.
- Parameters:
tar_em (EmitterSet) – target EmitterSet
tar_frames (torch.Tensor) – frames of size \(((N,),C,H,W)\)
- Returns:
Weight mask of size \(((N,),D,H,W)\) where likely \(C=D\)
- Return type:
torch.Tensor
- class decode.neuralfitter.weight_generator.SimpleWeight(*, xextent, yextent, img_shape, roi_size, weight_mode='const', weight_power=None, forward_safety=True, ix_low=None, ix_high=None, squeeze_batch_dim=False)[source]#
Bases:
WeightGenerator
- Parameters:
xextent (tuple) – extent in x
yextent (tuple) – extent in y
img_shape (tuple) – image shape
roi_size (int) – roi size of the target
weight_mode (str) – constant or phot
weight_power (float) – power factor of the weight
forward_safety (bool) – check sanity of forward arguments
- check_forward_sanity(tar_em, tar_frames, ix_low, ix_high)[source]#
Check sanity of forward arguments, raise error otherwise.
- Parameters:
tar_em (EmitterSet) – target emitters
tar_frames (Tensor) – target frames
ix_low (int) – lower frame index
ix_high (int) – upper frame index
- forward(tar_em, tar_frames, ix_low=None, ix_high=None)[source]#
Calculate weight map based on target frames and target emitters.
- Parameters:
tar_em (EmitterSet) – target EmitterSet
tar_frames (torch.Tensor) – frames of size \(((N,),C,H,W)\)
- Returns:
Weight mask of size \(((N,),D,H,W)\) where likely \(C=D\)
- Return type:
torch.Tensor
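A sketch of the two weight modes: ‘const’ gives every emitter ROI the same weight, while ‘phot’ presumably scales the weight with a power of the photon count (the exact normalization used by the library is an assumption here):

```python
def emitter_weight(phot, weight_mode="const", weight_power=1.0):
    """Per-emitter weight: constant, or a power of the photon count."""
    if weight_mode == "const":
        return 1.0
    if weight_mode == "phot":
        return phot ** weight_power
    raise ValueError(f"unknown weight_mode: {weight_mode}")

emitter_weight(100.0, "phot", 0.5)  # -> 10.0
```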
- class decode.neuralfitter.weight_generator.WeightGenerator(ix_low=None, ix_high=None, squeeze_batch_dim=False)[source]#
Bases:
TargetGenerator
- Parameters:
xy_unit – Which unit to use for target generator
ix_low (Optional[int]) – lower bound of frame / batch index
ix_high (Optional[int]) – upper bound of frame / batch index
squeeze_batch_dim (bool) – if lower and upper frame_ix are the same, squeeze out the batch dimension before return
- check_forward_sanity(tar_em, tar_frames, ix_low, ix_high)[source]#
Check sanity of forward arguments, raise error otherwise.
- Parameters:
tar_em (EmitterSet) – target emitters
tar_frames (Tensor) – target frames
ix_low (int) – lower frame index
ix_high (int) – upper frame index
- abstract forward(tar_em, tar_frames, ix_low, ix_high)[source]#
Calculate weight map based on target frames and target emitters.
- Parameters:
tar_em (EmitterSet) – target EmitterSet
tar_frames (torch.Tensor) – frames of size \(((N,),C,H,W)\)
- Returns:
Weight mask of size \(((N,),D,H,W)\) where likely \(C=D\)
- Return type:
torch.Tensor