decode.generic package#
Submodules#
decode.generic.emitter module#
- class decode.generic.emitter.CoordinateOnlyEmitter(xyz, xy_unit=None, px_size=None)[source]#
Bases:
EmitterSet
- Parameters:
xyz (Tensor) – (torch.tensor) N x 2 or N x 3
- class decode.generic.emitter.EmitterSet(xyz, phot, frame_ix, id=None, prob=None, bg=None, xyz_cr=None, phot_cr=None, bg_cr=None, xyz_sig=None, phot_sig=None, bg_sig=None, sanity_check=True, xy_unit=None, px_size=None)[source]#
Bases:
object
Initialises EmitterSet of \(N\) emitters.
- Parameters:
xyz (Tensor) – Coordinates of size \((N,3)\)
phot (Tensor) – Photon count of size \(N\)
frame_ix (LongTensor) – Index of the frame on which the emitter appears. Must be integer type. Size \(N\)
id (Optional[LongTensor]) – Identity of the emitter. Must be an integer tensor of the same type as frame_ix. Size \(N\)
prob (Optional[Tensor]) – Probability estimate of the emitter. Size \(N\)
bg (Optional[Tensor]) – Background estimate of the emitter. Size \(N\)
xyz_cr (Optional[Tensor]) – Cramér-Rao estimate of the emitter's position. Size \((N,3)\)
phot_cr (Optional[Tensor]) – Cramér-Rao estimate of the emitter's photon count. Size \(N\)
bg_cr (Optional[Tensor]) – Cramér-Rao estimate of the emitter's background value. Size \(N\)
xyz_sig (Optional[Tensor]) – Error estimate of the emitter's position. Size \((N,3)\)
phot_sig (Optional[Tensor]) – Error estimate of the photon count. Size \(N\)
bg_sig (Optional[Tensor]) – Error estimate of the background value. Size \(N\)
sanity_check (bool) – performs a sanity check if True
xy_unit (Optional[str]) – Unit of the x and y coordinates
px_size (Union[tuple, Tensor, None]) – Pixel size for unit conversion. If not specified, derived attributes (xyz_px and xyz_nm) cannot be accessed, because units cannot be converted without a pixel size.
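The sanity check verifies that the arguments have consistent shapes. A minimal plain-Python sketch of the checks performed (hypothetical helper for illustration; the actual check operates on torch tensors):

```python
def sanity_check(xyz, phot, frame_ix):
    """Sketch of the shape checks run when sanity_check=True.
    Hypothetical helper -- not the library implementation."""
    n = len(xyz)
    # xyz must be of size N x 3
    assert all(len(row) == 3 for row in xyz), "xyz must be of size N x 3"
    # phot and frame_ix must have length N
    assert len(phot) == n and len(frame_ix) == n, "phot / frame_ix must be of size N"
    # frame_ix must be integer-valued
    assert all(float(f).is_integer() for f in frame_ix), "frame_ix must be integer type"
    return n
```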
- property bg_scr: Tensor#
- static cat(emittersets, remap_frame_ix=None, step_frame_ix=None)[source]#
Concatenate multiple emittersets into one emitterset which is returned. Optionally modify the frame indices by the arguments.
- Parameters:
emittersets (Iterable) – iterable of EmitterSets to be concatenated
remap_frame_ix (Optional[Tensor]) – new index of the 0th frame of each EmitterSet
step_frame_ix (Optional[int]) – step size between the 0th frames of the EmitterSets
- Returns:
concatenated emitters
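The optional frame-index modification can be sketched in plain Python (illustrative only; the actual method operates on EmitterSet instances and torch tensors):

```python
def remap_frames(frame_ix_per_set, remap_frame_ix=None, step_frame_ix=None):
    """Shift each set's frame indices so that its 0th frame starts at a new
    position -- a sketch of what cat() does with its optional arguments."""
    if remap_frame_ix is not None and step_frame_ix is not None:
        raise ValueError("Specify either remap_frame_ix or step_frame_ix, not both.")
    n = len(frame_ix_per_set)
    if remap_frame_ix is not None:
        shifts = list(remap_frame_ix)                    # explicit new 0th frame per set
    elif step_frame_ix is not None:
        shifts = [i * step_frame_ix for i in range(n)]   # constant step between sets
    else:
        shifts = [0] * n                                 # no remapping
    return [[ix + s for ix in fset] for fset, s in zip(frame_ix_per_set, shifts)]
```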
- chunks(chunks)[source]#
Splits the EmitterSet into (almost) equal chunks
- Parameters:
chunks (int) – number of splits
- Returns:
list of EmitterSets
- Return type:
list
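The near-equal split can be illustrated as follows (a sketch of the size distribution; the actual method returns EmitterSet instances):

```python
def chunk_sizes(n, chunks):
    """Distribute n emitters over `chunks` almost equal parts.
    Earlier chunks receive one extra element when n is not divisible."""
    base, rem = divmod(n, chunks)
    return [base + (1 if i < rem else 0) for i in range(chunks)]
```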
- property data: dict#
Return intrinsic data (without metadata)
- dim()[source]#
Returns the dimensionality of the coordinates. If z is 0 everywhere, returns 2, else 3.
- Return type:
int
Note
Does not do PCA or other sophisticated things.
- eq_attr(other)[source]#
Tests whether the meta attributes (xy_unit and px_size) are the same
- Parameters:
other – the EmitterSet to compare to
- Return type:
bool
- filter_by_sigma(fraction, dim=None, return_low=True)[source]#
Filter by sigma values. Returns EmitterSet.
- Parameters:
fraction (float) – relative fraction of emitters remaining after filtering. Ranges from 0. to 1.
dim (Optional[int]) – 2 or 3, to take z into account. If None, it is determined automatically.
return_low – if True, return the fraction of emitters with the lowest sigma values; if False, return the (1 - fraction) with the highest sigma values.
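The fraction-based selection can be sketched on plain lists (hypothetical helper; the real method works on the per-emitter sigma tensors and returns an EmitterSet):

```python
def filter_by_sigma(sigmas, fraction, return_low=True):
    """Keep `fraction` of the emitters with the lowest sigma values
    (or, with return_low=False, the (1 - fraction) with the highest)."""
    k = round(fraction * len(sigmas))
    order = sorted(range(len(sigmas)), key=lambda i: sigmas[i])  # ascending sigma
    keep = order[:k] if return_low else order[k:]
    return sorted(keep)  # indices of the emitters that survive the filter
```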
- get_subset_frame(frame_start, frame_end, frame_ix_shift=None)[source]#
Returns emitters that are in the frame range as specified.
- Parameters:
frame_start – (int) lower frame index limit
frame_end – (int) upper frame index limit (inclusive)
frame_ix_shift – (optional) shift applied to the frame index of the returned emitters
- Returns:
emitters within the specified frame range
- static load(file)[source]#
Loads the set of emitters which was saved by the ‘save’ method.
- Parameters:
file (Union[str, Path]) – path to the emitterset
- Returns:
EmitterSet
- property meta: dict#
Return metadata of EmitterSet
- property phot_scr: Tensor#
- populate_crlb(psf, **kwargs)[source]#
Populate the CRLB values by the PSF function.
- Parameters:
psf (PSF) – Point Spread function with CRLB implementation
**kwargs – additional arguments to be passed to the CRLB method
- save(file)[source]#
Pickle-saves this instance's dictionary. No legacy guarantees are given; this should only be used for short-term storage.
- Parameters:
file (Union[str, Path]) – path where to save
- property single_frame: bool#
Check if all emitters are on the same frame.
- Returns:
bool
- sort_by_frame()[source]#
Sort a deepcopy of this emitterset and return it.
- Returns:
Sorted copy of this emitterset
- split_in_frames(ix_low=0, ix_up=None)[source]#
Splits a set of emitters into a list of EmitterSets based on their respective frame index.
- Parameters:
ix_low (int) – lower frame bound (default 0)
ix_up (Optional[int]) – upper frame bound (default None)
- Return type:
list
- Returns:
list of EmitterSets, one per frame
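The grouping by frame index can be sketched on plain lists (illustrative; the real method returns a list of EmitterSets):

```python
def split_in_frames(frame_ix, ix_low=0, ix_up=None):
    """Group emitter indices by frame -- one list per frame in [ix_low, ix_up].
    Assumption for this sketch: the upper bound defaults to the last frame."""
    if ix_up is None:
        ix_up = max(frame_ix)
    return [[i for i, f in enumerate(frame_ix) if f == frame]
            for frame in range(ix_low, ix_up + 1)]
```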
- to_dict()[source]#
Returns a dictionary representation of this EmitterSet, such that the keys correspond to the arguments with which an EmitterSet is initialised.
- Return type:
dict
Example
>>> em_dict = em.to_dict()  # any EmitterSet instance
>>> em_clone = EmitterSet(**em_dict)  # returns a clone of the EmitterSet
- property xyz_nm: Tensor#
Returns xyz in nanometres and performs respective transformations if needed.
- property xyz_px: Tensor#
Returns xyz in pixel coordinates and performs respective transformations if needed.
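The unit conversion behind these properties can be sketched as follows (assumptions for illustration: px_size holds the x and y pixel sizes in nm, and z is stored in nm already):

```python
def xyz_px_to_nm(xyz, px_size):
    """Convert x/y from pixel units to nm by multiplying with the pixel size.
    Assumption for this sketch: z (if present) is already in nm and unchanged."""
    return [[x * px_size[0], y * px_size[1], *rest] for (x, y, *rest) in xyz]
```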
- property xyz_scr: Tensor#
Square-root Cramér-Rao of xyz.
- property xyz_scr_nm: Tensor#
- property xyz_scr_px: Tensor#
Square-root Cramér-Rao of xyz in px units.
- property xyz_sig_tot_nm: Tensor#
- property xyz_sig_weighted_tot_nm: Tensor#
- class decode.generic.emitter.EmptyEmitterSet(xy_unit=None, px_size=None)[source]#
Bases:
CoordinateOnlyEmitter
- Parameters:
xyz – (torch.tensor) N x 2, N x 3
- class decode.generic.emitter.LooseEmitterSet(xyz, intensity, ontime, t0, xy_unit, px_size, id=None, sanity_check=True)[source]#
Bases:
object
- Parameters:
xyz (torch.Tensor) – coordinates. Dimension: N x 3
intensity (torch.Tensor) – intensity, i.e. photon flux per time unit. Dimension N
t0 (torch.Tensor, float) – time of the initial blink event. Dimension: N
ontime (torch.Tensor) – duration, in frame-time units, for which the emitter is on. Dimension: N
id (torch.Tensor, int, optional) – identity of the emitter. Dimension: N
xy_unit (string) – unit of the coordinates
- return_emitterset()[source]#
Returns EmitterSet with distributed emitters. The ID is preserved such that localisations coming from the same fluorophore will have the same ID.
- Returns:
EmitterSet
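The distribution of a loose (time-continuous) emitter onto discrete frames can be sketched like this (illustrative; per-frame photon numbers would then be the intensity times the on-fraction):

```python
import math

def distribute_blink(t0, ontime):
    """Split one blink event (start t0, duration ontime, in frame-time units)
    into (frame_index, on_fraction) pairs -- a sketch of the idea."""
    te = t0 + ontime                        # end time of the blink event
    first = math.floor(t0)                  # first frame touched
    last = math.ceil(te) - 1                # last frame touched
    out = []
    for f in range(first, last + 1):
        frac = min(te, f + 1) - max(t0, f)  # overlap of the event with frame f
        out.append((f, frac))
    return out
```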
- property te#
- class decode.generic.emitter.RandomEmitterSet(num_emitters, extent=32, xy_unit='px', px_size=None)[source]#
Bases:
EmitterSet
Initialises EmitterSet of \(N\) emitters.
- Parameters:
xyz – Coordinates of size \((N,3)\)
phot – Photon count of size \(N\)
frame_ix – Index of the frame on which the emitter appears. Must be integer type. Size \(N\)
id – Identity of the emitter. Must be an integer tensor of the same type as frame_ix. Size \(N\)
prob – Probability estimate of the emitter. Size \(N\)
bg – Background estimate of the emitter. Size \(N\)
xyz_cr – Cramér-Rao estimate of the emitter's position. Size \((N,3)\)
phot_cr – Cramér-Rao estimate of the emitter's photon count. Size \(N\)
bg_cr – Cramér-Rao estimate of the emitter's background value. Size \(N\)
xyz_sig – Error estimate of the emitter's position. Size \((N,3)\)
phot_sig – Error estimate of the photon count. Size \(N\)
bg_sig – Error estimate of the background value. Size \(N\)
sanity_check – performs a sanity check if True
xy_unit (str) – Unit of the x and y coordinates
px_size (Optional[tuple]) – Pixel size for unit conversion. If not specified, derived attributes (xyz_px and xyz_nm) cannot be accessed, because units cannot be converted without a pixel size.
- decode.generic.emitter.at_least_one_dim(*args)[source]#
Make tensors at least one-dimensional (in place)
- Return type:
None
decode.generic.process module#
- class decode.generic.process.Identity[source]#
Bases:
ProcessEmitters
- class decode.generic.process.RemoveOutOfFOV(xextent, yextent, zextent=None, xy_unit=None)[source]#
Bases:
ProcessEmitters
Processing class to remove emitters that are outside a specified extent. The lower/left extent limits are included; the upper/right limits are excluded (half-open intervals).
- Parameters:
xextent – extent of allowed field in x direction
yextent – extent of allowed field in y direction
zextent – (optional) extent of allowed field in z direction
xy_unit – coordinate unit in which the extents are interpreted
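The half-open interval logic can be sketched as follows (hypothetical helper; the real class operates on an EmitterSet):

```python
def in_fov(xyz, xextent, yextent, zextent=None):
    """Return indices of emitters inside the field of view.
    Lower limits are inclusive, upper limits exclusive (half-open)."""
    def inside(v, ext):
        return ext[0] <= v < ext[1]
    keep = []
    for i, (x, y, *z) in enumerate(xyz):
        ok = inside(x, xextent) and inside(y, yextent)
        if zextent is not None and z:    # check z only if a z extent is given
            ok = ok and inside(z[0], zextent)
        if ok:
            keep.append(i)
    return keep
```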
decode.generic.slicing module#
- decode.generic.slicing.ix_split(ix, ix_min, ix_max)[source]#
Splits an index rather than a sliceable (as above). This might be slower than splitting the sliceable, because here we cannot simply sort once and return the elements of interest, but must return the indices instead.
- Parameters:
ix (torch.Tensor) – index to split
ix_min (int) – lower limit
ix_max (int) – upper limit (inclusive)
- Returns:
list of logical(!) indices
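The logical indices can be sketched on plain lists (the real function works on a torch tensor):

```python
def ix_split(ix, ix_min, ix_max):
    """One boolean mask per index value in [ix_min, ix_max] (upper bound inclusive)."""
    return [[v == k for v in ix] for k in range(ix_min, ix_max + 1)]
```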
- decode.generic.slicing.split_sliceable(x, x_ix, ix_low, ix_high)[source]#
Split a sliceable / iterable according to an index into a list of elements between a lower and an upper bound. Index values for which no element is present are filled with empty instances of the iterable itself.
This function is mainly used to split an EmitterSet into a list of EmitterSets according to its frame index. It can also be called with x and x_ix being the same; in that case you get a list of indices that can be used for further indexing.
- Parameters:
x – sliceable / iterable
x_ix (torch.Tensor) – index according to which to split
ix_low (int) – lower bound
ix_high (int) – upper bound
- Returns:
list of instances sliced as specified by the x_ix
- Return type:
list
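A plain-list sketch of the splitting behaviour (assuming, for illustration, that the upper bound is inclusive as for ix_split):

```python
def split_sliceable(x, x_ix, ix_low, ix_high):
    """Group elements of x by their index value; index values with no element
    yield an empty list (the real function yields empty instances of x's type)."""
    return [[xi for xi, ix in zip(x, x_ix) if ix == k]
            for k in range(ix_low, ix_high + 1)]
```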
decode.generic.test_utils module#
- decode.generic.test_utils.file_loadable(path, reader=None, mode=None, exceptions=None)[source]#
Check whether a file is present and loadable. This function could be used in a while loop together with sleep.
- Return type:
bool
Example
while not file_loadable(path, …):
    time.sleep()
- decode.generic.test_utils.open_n_hash(file)[source]#
Compute the SHA-256 hash of a file
- Parameters:
file (Union[str, Path]) – path to the file
- Return type:
str
- Returns:
str
- decode.generic.test_utils.same_weights(model1, model2)[source]#
Tests whether model1 and model2 have the same weights.
- Return type:
bool
- decode.generic.test_utils.tens_almeq(a, b, prec=1e-08, nan=False)[source]#
Tests whether a and b are equal (i.e. all elements are the same) within a given precision. If both tensors are all NaN, the function returns False unless nan=True.
- Parameters:
a (Tensor) – first tensor for comparison
b (Tensor) – second tensor for comparison
prec (float) – precision of the comparison
nan (bool) – if True, the function returns True if both tensors are all NaN
- Return type:
bool
- Returns:
bool
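The comparison semantics can be sketched on flat lists of floats (hypothetical helper; the real function compares torch tensors element-wise):

```python
import math

def almost_equal(a, b, prec=1e-8, nan=False):
    """True if all elements agree within prec; an all-NaN pair counts as
    equal only when nan=True."""
    if all(math.isnan(x) for x in a) and all(math.isnan(x) for x in b):
        return bool(nan)
    return all(abs(x - y) <= prec for x, y in zip(a, b))
```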